Title: What governs attitudes toward artificial intelligence adoption and governance?
Abstract

Designing effective and inclusive governance and public communication strategies for artificial intelligence (AI) requires understanding how stakeholders reason about its use and governance. We examine underlying factors and mechanisms that drive attitudes toward the use and governance of AI across six policy-relevant applications using structural equation modeling and surveys of both US adults (N = 3,524) and technology workers enrolled in an online computer science master’s degree program (N = 425). We find that the cultural values of individualism, egalitarianism, general risk aversion, and techno-skepticism are important drivers of AI attitudes. Perceived benefit drives attitudes toward AI use but not its governance. Experts hold more nuanced views than the public and are more supportive of AI use but not its regulation. Drawing on these findings, we discuss challenges and opportunities for participatory AI governance, and we recommend that trustworthy AI governance be emphasized as strongly as trustworthy AI.
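The abstract describes a structural equation modeling (SEM) analysis in which latent cultural values (individualism, egalitarianism, risk aversion, techno-skepticism) and perceived benefit predict attitudes toward AI use and governance. Purely as an illustration of what such an analysis can look like in code, the sketch below uses the semopy Python package; the file name, indicator names, and model specification are invented for illustration and are not the authors' data or model.

import pandas as pd
from semopy import Model

# Hypothetical measurement and structural model: latent cultural values and
# perceived benefit predicting support for AI use and for AI governance.
# All construct and indicator names below are invented for illustration;
# perceived_benefit is treated as an observed composite score column in df.
MODEL_DESC = """
individualism     =~ ind_1 + ind_2 + ind_3
egalitarianism    =~ egal_1 + egal_2 + egal_3
risk_aversion     =~ risk_1 + risk_2 + risk_3
techno_skepticism =~ skept_1 + skept_2 + skept_3
support_use        ~ individualism + egalitarianism + risk_aversion + techno_skepticism + perceived_benefit
support_governance ~ individualism + egalitarianism + risk_aversion + techno_skepticism
"""

df = pd.read_csv("survey_responses.csv")  # hypothetical item-level survey data

model = Model(MODEL_DESC)
model.fit(df)            # estimate factor loadings and path coefficients
print(model.inspect())   # parameter estimates, standard errors, p-values

The actual constructs, indicators, and paths would follow the published methodology; this sketch only shows the general shape of an SEM specification for survey data of this kind.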

 
Award ID(s):
2107455
NSF-PAR ID:
10375944
Author(s) / Creator(s):
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
Science and Public Policy
Volume:
50
Issue:
2
ISSN:
0302-3427
Format(s):
Medium: X
Size(s):
p. 161-176
Sponsoring Org:
National Science Foundation
More Like this
  1. Automatic emotion recognition (ER)-enabled wellbeing interventions use ER algorithms to infer the emotions of a data subject (i.e., a person about whom data is collected or processed to enable ER) based on data generated from their online interactions, such as social media activity, and intervene accordingly. The potential commercial applications of this technology are widely acknowledged, particularly in the context of social media. Yet, little is known about data subjects' conceptualizations of and attitudes toward automatic ER-enabled wellbeing interventions. To address this gap, we interviewed 13 US adult social media data subjects regarding social media-based automatic ER-enabled wellbeing interventions. We found that participants' attitudes toward automatic ER-enabled wellbeing interventions were predominantly negative. Negative attitudes were largely shaped by how participants compared their conceptualizations of Artificial Intelligence (AI) to the humans that traditionally deliver wellbeing support. Comparisons between AI and human wellbeing interventions were based upon human attributes participants doubted AI could hold: 1) helpfulness and authentic care; 2) personal and professional expertise; 3) morality; and 4) benevolence through shared humanity. In some cases, participants' attitudes toward automatic ER-enabled wellbeing interventions shifted when participants conceptualized automatic ER-enabled wellbeing interventions' impact on others, rather than themselves. Though with reluctance, a minority of participants held more positive attitudes toward their conceptualizations of automatic ER-enabled wellbeing interventions, citing their potential to benefit others: 1) by supporting academic research; 2) by increasing access to wellbeing support; and 3) through egregious harm prevention. However, most participants anticipated harms associated with their conceptualizations of automatic ER-enabled wellbeing interventions for others, such as re-traumatization, the spread of inaccurate health information, inappropriate surveillance, and interventions informed by inaccurate predictions. Lastly, while participants had qualms about automatic ER-enabled wellbeing interventions, we identified three development and delivery qualities of automatic ER-enabled wellbeing interventions upon which their attitudes toward them depended: 1) accuracy; 2) contextual sensitivity; and 3) positive outcome. Our study is not motivated to make normative statements about whether or how automatic ER-enabled wellbeing interventions should exist, but to center voices of the data subjects affected by this technology. We argue for the inclusion of data subjects in the development of requirements for ethical and trustworthy ER applications. To that end, we discuss ethical, social, and policy implications of our findings, suggesting that automatic ER-enabled wellbeing interventions imagined by participants are incompatible with aims to promote trustworthy, socially aware, and responsible AI technologies in the current practical and regulatory landscape in the US. 
  2. Mahmoud, Ali B. (Ed.)

    Billions of dollars are being invested in developing medical artificial intelligence (AI) systems, and yet public opinion of AI in the medical field appears to be mixed. Although the American public holds high expectations for the future of medical AI, anxiety and uncertainty about what it can do and how it works are widespread. Continuing evaluation of public opinion on AI in healthcare is necessary to ensure alignment between patient attitudes and the technologies adopted. We conducted a representative-sample survey (total N = 203) to measure the American public's trust in medical AI. Primarily, we contrasted preferences for AI and human professionals as medical decision-makers. Additionally, we measured expectations for the impact and use of medical AI in the future. We present four noteworthy results: (1) The general public strongly prefers that human medical professionals, rather than AI, make medical decisions, while at the same time believing those professionals are more likely than AI to make culturally biased decisions. (2) The general public is more comfortable with a human reading their medical records than an AI, both now and “100 years from now.” (3) The general public is nearly evenly split between those who would trust their own doctor to use AI and those who would not. (4) Respondents expect AI will improve medical treatment, but more so in the distant future than immediately.

     
  3. How can the public sector use AI ethically and responsibly for the benefit of people? The sustainable development and deployment of artificial intelligence (AI) in the public sector requires dialogue and deliberation among developers, decision makers, deployers, end users, and the public. This paper contributes to the debate on how to develop persuasive government approaches for steering the development and use of AI. We examine the ethical issues and the role of the public in the debate on developing public sector governance for socially and democratically sustainable, technology-intensive societies. To concretize this discussion, we study the co-development of the Finnish national AI program AuroraAI, which aims to use AI to provide citizens with tailored and timely services for different life situations. With the help of this case study, we investigate the challenges posed by the development and use of AI in the service of public administration. We draw particular attention to the efforts made by the AuroraAI Ethics Board in deliberating on AuroraAI solution options and working toward a sustainable and inclusive AI society.
  4. Artificial Intelligence (AI) is a transformative force in communication and messaging strategy, with potential to disrupt traditional approaches. Large language models (LLMs), a form of AI, are capable of generating high-quality, humanlike text. We investigate the persuasive quality of AI-generated messages to understand how AI could impact public health messaging. Specifically, through a series of studies designed to characterize and evaluate generative AI in developing public health messages, we analyze COVID-19 pro-vaccination messages generated by GPT-3, a state-of-the-art instantiation of a large language model. Study 1 is a systematic evaluation of GPT-3's ability to generate pro-vaccination messages. Study 2 then examined people's perceptions of curated GPT-3-generated messages compared to human-authored messages released by the CDC (Centers for Disease Control and Prevention), finding that GPT-3 messages were perceived as more effective and as stronger arguments, and evoked more positive attitudes, than CDC messages. Finally, Study 3 assessed the effect of source labels on perceived quality, finding that while participants preferred AI-generated messages, they disfavored messages that were labeled as AI-generated. The results suggest that, with human supervision, AI can be used to create effective public health messages, but that individuals prefer their public health messages to come from human institutions rather than AI sources. We propose best practices for assessing the generative outputs of large language models in future social science research, as well as ways that health professionals can use AI systems to augment public health messaging.

     
  5. Background and Aims

    Our ability to combat the opioid epidemic depends, in part, on dismantling the stigma that surrounds drug use. However, this epidemic has been unique, and, to date, the nature of the public prejudices associated with it has not been well understood. Here, we examine the nature and magnitude of public stigma toward prescription opioid use disorder (OUD) using the only nationally representative data available on this topic.

    Design

    General Social Survey (GSS), a cross‐sectional, nationally representative survey of public attitudes.

    Setting

    United States, 2018.

    Participants/Cases

    A total of 1,169 US residents recruited using a probability sample.

    Measurements

    Respondents completed a vignette‐based survey experiment to assess public stigma toward people who develop OUD following prescription of opioid analgesics. This condition is compared with depression, schizophrenia, alcohol use disorder (AUD) and subclinical distress using multivariable logistic or linear regression.

    Findings

    Adjusting for covariates (e.g. race, age, gender), US residents were significantly more likely to label symptoms of OUD a physical illness [73%, confidence interval (CI) = 66–80%; P < 0.001] relative to all other conditions, and less likely to label OUD a mental illness (40%, CI = 32–48%; P < 0.001). OUD was significantly less likely to be attributed to bad character (37%, CI = 30–44%; P < 0.001) or poor upbringing (17%, CI = 12–23%; P < 0.001) compared with AUD. Nonetheless, perceptions of competence associated with OUD (e.g. ability to manage money; 41%, CI = 33–49%; P < 0.01) were lower than for AUD, depression and subclinical distress. Moreover, willingness to socially exclude people with OUD was very high (e.g. 76% of respondents do not want to work with a person with OUD), paralleling findings on traditional targets of strong stigma (i.e. AUD and schizophrenia).

    Conclusions

    US residents do not typically hold people with prescription opioid use disorder responsible for their addiction, but they express high levels of willingness to subject them to social exclusion.
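    The Findings reported above come from vignette comparisons adjusted for covariates using multivariable logistic regression. Purely as an illustration of that kind of model, and not the authors' actual code, a minimal sketch in Python with statsmodels might look like the following; the data file and column names are assumptions, not the GSS variables.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of vignette responses; all column names are invented.
df = pd.read_csv("gss_vignette_2018.csv")

# Outcome: whether the respondent labeled the vignette symptoms a physical illness.
# Predictor of interest: which condition the vignette described (OUD, AUD,
# depression, schizophrenia, subclinical distress), with OUD as the reference.
model = smf.logit(
    "labeled_physical_illness ~ C(condition, Treatment(reference='OUD'))"
    " + age + C(race) + C(gender)",
    data=df,
).fit()

print(model.summary())  # covariate-adjusted comparisons across vignette conditions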

     