This content will become publicly available on April 1, 2026

Title: Perceptions of AI Ethics Policies Among Scientists and Engineers in Policy-Related Roles: An Exploratory Investigation
Abstract: The explosive growth of artificial intelligence (AI) over the past few years has focused attention on how diverse stakeholders regulate these technologies to ensure their safe and ethical use. Increasingly, governmental bodies, corporations, and nonprofit organizations are developing strategies and policies for AI governance. While existing literature on ethical AI has focused on the various principles and guidelines that have emerged as a result of these efforts, how these principles are operationalized and translated into broader policy is still the subject of ongoing research. Specifically, there is a gap in our understanding of how policy practitioners actively engage with, contextualize, or reflect on existing AI ethics policies in their daily professional activities. The perspectives of these policy experts towards AI regulation in general are not fully understood. To this end, this paper explores the perceptions of scientists and engineers in policy-related roles in the US public and nonprofit sectors towards AI ethics policy, both in the US and abroad. We interviewed 15 policy experts and found that although these experts were generally familiar with AI governance efforts within their domains, overall knowledge of guiding frameworks and critical regulatory policies was still limited. There was also a general perception among the experts we interviewed that the US lagged behind other comparable countries in regulating AI, a finding that supports conclusions in the existing literature. Lastly, we conducted a preliminary comparison between the AI ethics policies identified by the policy experts in our study and those emphasized in existing literature, identifying both commonalities and areas of divergence.
Award ID(s):
2412398
PAR ID:
10621243
Author(s) / Creator(s):
Publisher / Repository:
Springer Nature
Date Published:
Journal Name:
Digital Society
Volume:
4
Issue:
1
ISSN:
2731-4650
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: As artificial intelligence (AI) grows in popularity and importance, both as a domain within broader computing research and in society at large, increasing attention will need to be paid to the ethical governance of this emerging technology. The attitudes and competencies with respect to AI ethics and policy among post-secondary students studying computer science (CS) are of particular interest, as many of these students will go on to play key roles in the development and deployment of future AI innovations. Despite this population of computer scientists being at the forefront of learning about and using AI tools, their attitudes towards AI remain understudied in the literature. In an effort to begin to close this gap, in fall 2024 we fielded a survey (n = 117) to undergraduate and graduate students enrolled in CS courses at a large public university in the United States to assess their attitudes towards the nascent fields of AI ethics and policy. Additionally, we conducted one-on-one follow-up interviews with 13 students to elicit more in-depth responses on topics such as the use of AI tools in the classroom, ethical impacts of AI, and government regulation of AI. In this paper, we describe the findings of our exploratory study, drawing parallels and contrasts to broader public opinion polling in the United States. We conclude by evaluating the implications of CS student attitudes on the future of AI education and governance.
  2. As generative AI technologies proliferate across higher education, many U.S. universities are still developing institutional policies to address their ethical, pedagogical, and accessibility implications. This posIT column critically examines AI policies and resources at 50 four-year universities, one from each U.S. state, to assess alignment with the Association of Research Libraries' (ARL) Guiding Principles for Artificial Intelligence. Through content analysis of LibGuides, AI taskforce membership, campus events, and public-facing policies, the study reveals widespread adoption of AI resources but a significant lack of clarity, consistency, and librarian involvement in policy development. While most institutions meet baseline criteria related to privacy, plagiarism, and algorithmic transparency, fewer address AI's potential harms to marginalized communities or its impact on accessibility for students with disabilities. Notably, fewer than half of the AI taskforces surveyed included library staff, despite librarians' expertise in digital literacy and ethical information use. This column urges academic librarians to actively seek leadership roles in institutional AI governance to help shape inclusive, responsible, and human-centered AI policy frameworks.
  3. Datasets carry cultural and political context at all parts of the data life cycle. Historically, Earth science data repositories have taken their guidance and policies as a combination of mandates from their funding agencies and the needs of their user communities, typically universities, agencies, and researchers. Consequently, repository practices have rarely taken into consideration the needs of other communities, such as the Indigenous Peoples on whose lands data are often acquired. In recent years, a number of global efforts have worked to improve the conduct of research as well as the data policy and practices of the repositories that hold and disseminate it. One of these established the CARE Principles for Indigenous Data Governance (Carroll et al. 2020), representing 'Collective Benefit', 'Authority to Control', 'Responsibility', and 'Ethics', hosted by the Global Indigenous Data Alliance (GIDA 2023a). In order to align with the CARE Principles, repositories may need to update their policies, architecture, service offerings, and collaboration models. The question is how? Operationalizing principles into active repositories is generally a fraught process. This paper captures perspectives and recommendations from many of the repositories that are members of the Earth Science Information Partners (ESIPFed, n.d.), in conjunction with members of the Collaboratory for Indigenous Data Governance (Collaboratory for Indigenous Data Governance n.d.) and GIDA, and defines and prioritizes the set of activities Earth and environmental repositories can take to better adhere to the CARE Principles, in the hope that this will help implementation in repositories globally.
  4. The objective of this paper is to establish the fundamental public value principles that should govern safe and trusted artificial intelligence (AI). Public value is a dynamic concept that encompasses several dimensions. AI itself has evolved quite rapidly in the last few years, especially with the swift escalation of generative AI. Governments around the world are grappling with how to govern AI, just as technologists ring alarm bells about the future consequences of AI. Our paper extends the debate on AI governance from the ethical value of beneficence to the economic value of the public good. Viewed as a public good, the use of AI lies beyond the control of its creators. Towards this end, the paper examined AI policies in the United States and Europe. We postulate three principles from a public values perspective: (i) ensuring security and privacy of each individual (or entity); (ii) ensuring trust in AI systems is verifiable; and (iii) ensuring fair and balanced AI protocols, wherein the underlying components of data and algorithms are contestable and open to public debate.
  5. How do practitioners who develop consumer AI products scope, motivate, and conduct privacy work? Respecting privacy is a key principle for developing ethical, human-centered AI systems, but we cannot hope to better support practitioners without answers to that question. We interviewed 35 industry AI practitioners to bridge that gap. We found that practitioners viewed privacy as actions taken against pre-defined intrusions that can be exacerbated by the capabilities and requirements of AI, but few were aware of AI-specific privacy intrusions documented in prior literature. We found that their privacy work was rigidly defined and situated, guided by compliance with privacy regulations and policies, and generally demotivated beyond meeting minimum requirements. Finally, we found that the methods, tools, and resources they used in their privacy work generally did not help address the unique privacy risks introduced or exacerbated by their use of AI in their products. Collectively, these findings reveal the need and opportunity to create tools, resources, and support structures to improve practitioners' awareness of AI-specific privacy risks, motivations to do AI privacy work, and ability to address privacy harms introduced or exacerbated by their use of AI in consumer products.