This content will become publicly available on August 21, 2026

Title: Assessing computer science student attitudes towards AI ethics and policy
Abstract: As artificial intelligence (AI) grows in popularity and importance—both as a domain within broader computing research and in society at large—increasing attention will need to be paid to the ethical governance of this emerging technology. The attitudes and competencies with respect to AI ethics and policy among post-secondary students studying computer science (CS) are of particular interest, as many of these students will go on to play key roles in the development and deployment of future AI innovations. Despite this population of computer scientists being at the forefront of learning about and using AI tools, their attitudes towards AI remain understudied in the literature. In an effort to begin to close this gap, in fall 2024 we fielded a survey (n = 117) to undergraduate and graduate students enrolled in CS courses at a large public university in the United States to assess their attitudes towards the nascent fields of AI ethics and policy. Additionally, we conducted one-on-one follow-up interviews with 13 students to elicit more in-depth responses on topics such as the use of AI tools in the classroom, ethical impacts of AI, and government regulation of AI. In this paper, we describe the findings of our exploratory study, drawing parallels and contrasts to broader public opinion polling in the United States. We conclude by evaluating the implications of CS student attitudes on the future of AI education and governance.
Award ID(s):
2418867
PAR ID:
10632208
Publisher / Repository:
Springer
Journal Name:
AI and Ethics
ISSN:
2730-5953
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: The explosive growth of artificial intelligence (AI) over the past few years has focused attention on how diverse stakeholders regulate these technologies to ensure their safe and ethical use. Increasingly, governmental bodies, corporations, and nonprofit organizations are developing strategies and policies for AI governance. While existing literature on ethical AI has focused on the various principles and guidelines that have emerged as a result of these efforts, just how these principles are operationalized and translated to broader policy is still the subject of current research. Specifically, there is a gap in our understanding of how policy practitioners actively engage with, contextualize, or reflect on existing AI ethics policies in their daily professional activities. The perspectives of these policy experts towards AI regulation in general are not fully understood. To this end, this paper explores the perceptions of scientists and engineers in policy-related roles in the US public and nonprofit sectors towards AI ethics policy, both in the US and abroad. We interviewed 15 policy experts and found that although these experts were generally familiar with AI governance efforts within their domains, overall knowledge of guiding frameworks and critical regulatory policies was still limited. There was also a general perception among the experts we interviewed that the US lagged behind other comparable countries in regulating AI, a finding that supports the conclusions of existing literature. Lastly, we conducted a preliminary comparison between the AI ethics policies identified by the policy experts in our study and those emphasized in existing literature, identifying both commonalities and areas of divergence.
  2. Scholars and public figures have called for improved ethics and social responsibility education in computer science degree programs in order to better address consequential technological issues in society. Indeed, rising public concern about computing technologies arguably represents an existential threat to the credibility of the computing profession itself. Despite these increasing calls, relatively little is known about the ethical development and beliefs of computer science students, especially compared to other science and engineering students. Gaps in scholarly research make it difficult to effectively design and evaluate ethics education interventions in computer science. Therefore, there is a pressing need for additional empirical study regarding the development of ethical attitudes in computer science students. Influenced by the Professional Social Responsibility Development Model, this study explores personal and professional social responsibility attitudes among undergraduate computing students. Using survey results from a sample of 982 students (including 184 computing majors) who graduated from a large engineering institution between 2017 and 2021, we compare social responsibility attitudes cross-sectionally among computer science students, engineering students, other STEM students, and non-STEM students. Study findings indicate computer science students have statistically significantly lower social responsibility attitudes than their peers in other science and engineering disciplines. In light of growing ethical concerns about the computing profession, this study provides evidence about extant challenges in computing education and buttresses calls for more effective development of social responsibility in computing students. We discuss implications for undergraduate computing programs, ethics education, and opportunities for future research. 
  3. Abstract: Designing effective and inclusive governance and public communication strategies for artificial intelligence (AI) requires understanding how stakeholders reason about its use and governance. We examine underlying factors and mechanisms that drive attitudes toward the use and governance of AI across six policy-relevant applications using structural equation modeling and surveys of both US adults (N = 3,524) and technology workers enrolled in an online computer science master’s degree program (N = 425). We find that the cultural values of individualism, egalitarianism, general risk aversion, and techno-skepticism are important drivers of AI attitudes. Perceived benefit drives attitudes toward AI use but not its governance. Experts hold more nuanced views than the public and are more supportive of AI use but not its regulation. Drawing on these findings, we discuss challenges and opportunities for participatory AI governance, and we recommend that trustworthy AI governance be emphasized as strongly as trustworthy AI.
  4. Maslej, Nestor; Fattorini, Loredana; Perrault, Raymond; Gil, Yolanda; Parli, Vanessa; Kariuki, Njenga; Capstick, Emily; Reuel, Anka; Brynjolfsson, Erik; Etchemendy, John (Ed.)
    AI has entered the public consciousness through generative AI’s impact on work—enhancing efficiency and automating tasks—but it has also driven innovation in education and personalized learning. Still, while AI promises benefits, it also poses risks—from hallucinating false outputs to reinforcing biases and diminishing critical thinking. With the AI education market expected to grow substantially, ethical concerns about the technology’s misuse—AI tools have already falsely accused marginalized students of cheating—are mounting, highlighting the need for responsible creation and deployment. Addressing these challenges requires both technical literacy and critical engagement with AI’s societal impact. Expanding AI expertise must begin in K–12 and higher education in order to ensure that students are prepared to be responsible users and developers. AI education cannot exist in isolation—it must align with broader computer science (CS) education efforts. This chapter examines the global state of AI and CS education, access disparities, and policies shaping AI’s role in learning. This chapter was a collaboration prepared by the Kapor Foundation, CSTA, PIT-UN and the AI Index. The Kapor Foundation works at the intersection of racial equity and technology to build equitable and inclusive computing education pathways, advance tech policies that mitigate harms and promote equitable opportunity, and deploy capital to support responsible, ethical, and equitable tech solutions. The CSTA is a global membership organization that unites, supports, and empowers educators to enhance the quality, accessibility, and inclusivity of computer science education. The Public Interest Technology University Network (PIT-UN) fosters collaboration between universities and colleges to build the PIT field and nurture a new generation of civic-minded technologists. 
  5. Although development of Artificial Intelligence (AI) technologies has been underway for decades, the acceleration of AI capabilities and the rapid expansion of user access in the past few years have elicited public excitement as well as alarm. Leaders in government and academia, as well as members of the public, are recognizing the critical need for the ethical production and management of AI. As a result, society is placing immense trust in engineering undergraduate and graduate programs to train future developers of AI in their ethical and public welfare responsibilities. In this paper, we investigate whether engineering master’s students believe they receive the training they need from their educational curricula to negotiate this complex ethical landscape. The goal of the broader project is to understand how engineering students become public welfare “watchdogs”; i.e., how they learn to recognize and respond to their public welfare responsibilities. As part of this project, we conducted in-depth interviews with 62 electrical and computer engineering master’s students at a large public university about their educational experiences and understanding of engineers’ professional responsibilities, including those related specifically to AI technologies. This paper asks: (1) do engineering master’s students see potential dangers of AI related to how the technologies are developed, used, or possibly misused? (2) Do they feel equipped to handle the challenges of these technologies and respond ethically when faced with difficult situations? (3) Do they hold their engineering educators accountable for training them in ethical concerns around AI? We find that although some engineering master’s students see exciting possibilities of AI, most are deeply concerned about the ethical and public welfare issues that accompany its advancement and deployment. While some students feel equipped to handle these challenges, the majority feel unprepared to manage these complex situations in their professional work. Additionally, students reported that the ethical development and application of technologies like AI is often not included in curricula or is treated as a “soft skill” less important than “technical” knowledge. Although some students we interviewed shared the apathy toward these topics that they perceive in their engineering programs, most were eager to receive more training in AI ethics. These results underscore the pressing need for engineering education programs, including graduate programs, to integrate comprehensive ethics, public responsibility, and whistleblower training within their curricula to ensure that the engineers of tomorrow are well-equipped to address the novel ethical dilemmas of AI that are likely to arise in the coming years.