Abstract As artificial intelligence (AI) grows in popularity and importance—both as a domain within broader computing research and in society at large—increasing attention will need to be paid to the ethical governance of this emerging technology. The attitudes and competencies with respect to AI ethics and policy among post-secondary students studying computer science (CS) are of particular interest, as many of these students will go on to play key roles in the development and deployment of future AI innovations. Despite this population of computer scientists being at the forefront of learning about and using AI tools, their attitudes towards AI remain understudied in the literature. In an effort to begin to close this gap, in fall 2024 we fielded a survey (n = 117) to undergraduate and graduate students enrolled in CS courses at a large public university in the United States to assess their attitudes towards the nascent fields of AI ethics and policy. Additionally, we conducted one-on-one follow-up interviews with 13 students to elicit more in-depth responses on topics such as the use of AI tools in the classroom, the ethical impacts of AI, and government regulation of AI. In this paper, we describe the findings of our exploratory study, drawing parallels and contrasts to broader public opinion polling in the United States. We conclude by evaluating the implications of CS student attitudes for the future of AI education and governance.
This content will become publicly available on April 1, 2026
Perceptions of AI Ethics Policies Among Scientists and Engineers in Policy-Related Roles: An Exploratory Investigation
Abstract The explosive growth of artificial intelligence (AI) over the past few years has focused attention on how diverse stakeholders regulate these technologies to ensure their safe and ethical use. Increasingly, governmental bodies, corporations, and nonprofit organizations are developing strategies and policies for AI governance. While existing literature on ethical AI has focused on the various principles and guidelines that have emerged as a result of these efforts, just how these principles are operationalized and translated into broader policy is still the subject of current research. Specifically, there is a gap in our understanding of how policy practitioners actively engage with, contextualize, or reflect on existing AI ethics policies in their daily professional activities. More generally, the perspectives of these policy experts on AI regulation are not fully understood. To this end, this paper explores the perceptions of scientists and engineers in policy-related roles in the US public and nonprofit sectors towards AI ethics policy, both in the US and abroad. We interviewed 15 policy experts and found that although these experts were generally familiar with AI governance efforts within their domains, overall knowledge of guiding frameworks and critical regulatory policies was still limited. There was also a general perception among the experts we interviewed that the US lagged behind other comparable countries in regulating AI, a finding that supports the conclusions of existing literature. Lastly, we conducted a preliminary comparison between the AI ethics policies identified by the policy experts in our study and those emphasized in existing literature, identifying both commonalities and areas of divergence.
- Award ID(s):
- 2412398
- PAR ID:
- 10621243
- Publisher / Repository:
- Springer Nature
- Date Published:
- Journal Name:
- Digital Society
- Volume:
- 4
- Issue:
- 1
- ISSN:
- 2731-4650
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like This
Datasets carry cultural and political context at all parts of the data life cycle. Historically, Earth science data repositories have taken their guidance and policies as a combination of mandates from their funding agencies and the needs of their user communities, typically universities, agencies, and researchers. Consequently, repository practices have rarely taken into consideration the needs of other communities, such as the Indigenous Peoples on whose lands data are often acquired. In recent years, a number of global efforts have worked to improve the conduct of research as well as data policy and practices by the repositories that hold and disseminate it. One of these established the CARE Principles for Indigenous Data Governance (Carroll et al. 2020)—representing 'Collective Benefit', 'Authority to Control', 'Responsibility', and 'Ethics'—hosted by the Global Indigenous Data Alliance (GIDA 2023a). In order to align with the CARE Principles, repositories may need to update their policies, architecture, service offerings, and collaboration models. The question is how? Operationalizing principles into active repositories is generally a fraught process. This paper captures perspectives and recommendations from many of the repositories that are members of the Earth Science Information Partners (ESIPFed, n.d.), in conjunction with members of the Collaboratory for Indigenous Data Governance (Collaboratory for Indigenous Data Governance n.d.) and GIDA, and defines and prioritizes a set of activities that Earth and environmental repositories can take to better adhere to the CARE Principles, in the hope that this will aid implementation in repositories globally.
The objective of this paper is to establish the fundamental public value principles that should govern safe and trusted artificial intelligence (AI). Public value is a dynamic concept that encompasses several dimensions. AI itself has evolved quite rapidly in the last few years, especially with the swift escalation of generative AI. Governments around the world are grappling with how to govern AI, just as technologists ring alarm bells about the future consequences of AI. Our paper extends the debate on AI governance from the ethical values of beneficence to the economic values of the public good. Viewed as a public good, AI use is beyond the control of its creators. Towards this end, the paper examined AI policies in the United States and Europe. We postulate three principles from a public values perspective: (i) ensuring the security and privacy of each individual (or entity); (ii) ensuring that trust in AI systems is verifiable; and (iii) ensuring fair and balanced AI protocols, wherein the underlying components of data and algorithms are contestable and open to public debate.
How do practitioners who develop consumer AI products scope, motivate, and conduct privacy work? Respecting privacy is a key principle for developing ethical, human-centered AI systems, but we cannot hope to better support practitioners without answers to that question. We interviewed 35 industry AI practitioners to bridge that gap. We found that practitioners viewed privacy as actions taken against pre-defined intrusions that can be exacerbated by the capabilities and requirements of AI, but few were aware of AI-specific privacy intrusions documented in prior literature. We found that their privacy work was rigidly defined and situated, guided by compliance with privacy regulations and policies, and generally demotivated beyond meeting minimum requirements. Finally, we found that the methods, tools, and resources they used in their privacy work generally did not help address the unique privacy risks introduced or exacerbated by their use of AI in their products. Collectively, these findings reveal the need and opportunity to create tools, resources, and support structures to improve practitioners' awareness of AI-specific privacy risks, motivations to do AI privacy work, and ability to address privacy harms introduced or exacerbated by their use of AI in consumer products.
Background: Studies of changes in engineering students' perceptions of ethics and social responsibility over time have often produced mixed results or shown only small longitudinal shifts. Comparisons across studies have been difficult due to the diverse frameworks used for measurement and analysis in research on engineering ethics, and have revealed major gaps between the measurement tools and instruments available to assess engineering ethics and the complexity of ethical and social responsibility constructs. Purpose/Hypothesis: The purpose of this study was to understand how engineering students' views of ethics and social responsibility change over the four years of their undergraduate degrees and to explore the use of reflexive principlism as an organizing framework for analyzing these changes. Design/Method: We used qualitative interviews of engineering students to explore multiple facets of their understanding of ethics and social responsibility. We interviewed 33 students in the first and fourth years of their undergraduate studies. We then inductively analyzed the pairs of interviews, using the reflexive principlism framework to formulate our findings. Results: We found that engineering students in their fourth year of studies were better able to engage in balancing across multiple ethical principles, and in specification of said principles, than they were as first-year students. They most frequently referenced nonmaleficence and, to a lesser degree, beneficence as relevant ethical principles at both time points, and were much less likely to reference justice and autonomy. Conclusions: This work shows the potential of using reflexive principlism as an analytical framework to illuminate the nuanced ways that engineering students' views of ethics and social responsibility change and develop over time. Our findings suggest reflexive principlism may also be useful as a pedagogical approach to better equip students to specify and balance all four principles when ethical situations arise.