Abstract: The explosive growth of artificial intelligence (AI) over the past few years has focused attention on how diverse stakeholders regulate these technologies to ensure their safe and ethical use. Increasingly, governmental bodies, corporations, and nonprofit organizations are developing strategies and policies for AI governance. While existing literature on ethical AI has focused on the various principles and guidelines that have emerged from these efforts, just how these principles are operationalized and translated into broader policy remains the subject of current research. Specifically, there is a gap in our understanding of how policy practitioners actively engage with, contextualize, or reflect on existing AI ethics policies in their daily professional activities. More broadly, the perspectives of these policy experts towards AI regulation are not fully understood. To this end, this paper explores the perceptions of scientists and engineers in policy-related roles in the US public and nonprofit sectors towards AI ethics policy, both in the US and abroad. We interviewed 15 policy experts and found that although these experts were generally familiar with AI governance efforts within their domains, overall knowledge of guiding frameworks and critical regulatory policies was still limited. There was also a general perception among the experts we interviewed that the US lagged behind other comparable countries in regulating AI, a finding that supports the conclusions of existing literature. Lastly, we conducted a preliminary comparison between the AI ethics policies identified by the policy experts in our study and those emphasized in existing literature, identifying both commonalities and areas of divergence.
Assessing computer science student attitudes towards AI ethics and policy
Abstract: As artificial intelligence (AI) grows in popularity and importance—both as a domain within broader computing research and in society at large—increasing attention will need to be paid to the ethical governance of this emerging technology. The attitudes and competencies with respect to AI ethics and policy among post-secondary students studying computer science (CS) are of particular interest, as many of these students will go on to play key roles in the development and deployment of future AI innovations. Despite this population of computer scientists being at the forefront of learning about and using AI tools, their attitudes towards AI remain understudied in the literature. In an effort to begin to close this gap, in fall 2024 we fielded a survey (n = 117) to undergraduate and graduate students enrolled in CS courses at a large public university in the United States to assess their attitudes towards the nascent fields of AI ethics and policy. Additionally, we conducted one-on-one follow-up interviews with 13 students to elicit more in-depth responses on topics such as the use of AI tools in the classroom, ethical impacts of AI, and government regulation of AI. In this paper, we describe the findings of our exploratory study, drawing parallels and contrasts to broader public opinion polling in the United States. We conclude by evaluating the implications of CS student attitudes for the future of AI education and governance.
- Award ID(s): 2418867
- PAR ID: 10632208
- Publisher / Repository: Springer
- Journal Name: AI and Ethics
- ISSN: 2730-5953
- Sponsoring Org: National Science Foundation
More Like this
-
Social Responsibility Attitudes Among Undergraduate Computer Science Students: An Empirical Analysis
Scholars and public figures have called for improved ethics and social responsibility education in computer science degree programs in order to better address consequential technological issues in society. Indeed, rising public concern about computing technologies arguably represents an existential threat to the credibility of the computing profession itself. Despite these increasing calls, relatively little is known about the ethical development and beliefs of computer science students, especially compared to other science and engineering students. Gaps in scholarly research make it difficult to effectively design and evaluate ethics education interventions in computer science. Therefore, there is a pressing need for additional empirical study regarding the development of ethical attitudes in computer science students. Influenced by the Professional Social Responsibility Development Model, this study explores personal and professional social responsibility attitudes among undergraduate computing students. Using survey results from a sample of 982 students (including 184 computing majors) who graduated from a large engineering institution between 2017 and 2021, we compare social responsibility attitudes cross-sectionally among computer science students, engineering students, other STEM students, and non-STEM students. Study findings indicate computer science students have statistically significantly lower social responsibility attitudes than their peers in other science and engineering disciplines. In light of growing ethical concerns about the computing profession, this study provides evidence about extant challenges in computing education and buttresses calls for more effective development of social responsibility in computing students. We discuss implications for undergraduate computing programs, ethics education, and opportunities for future research.
-
Abstract: Designing effective and inclusive governance and public communication strategies for artificial intelligence (AI) requires understanding how stakeholders reason about its use and governance. We examine underlying factors and mechanisms that drive attitudes toward the use and governance of AI across six policy-relevant applications using structural equation modeling and surveys of both US adults (N = 3,524) and technology workers enrolled in an online computer science master’s degree program (N = 425). We find that the cultural values of individualism, egalitarianism, general risk aversion, and techno-skepticism are important drivers of AI attitudes. Perceived benefit drives attitudes toward AI use but not its governance. Experts hold more nuanced views than the public and are more supportive of AI use but not its regulation. Drawing on these findings, we discuss challenges and opportunities for participatory AI governance, and we recommend that trustworthy AI governance be emphasized as strongly as trustworthy AI.
-
Although development of Artificial Intelligence (AI) technologies has been underway for decades, the acceleration of AI capabilities and rapid expansion of user access in the past few years has elicited public excitement as well as alarm. Leaders in government and academia, as well as members of the public, are recognizing the critical need for the ethical production and management of AI. As a result, society is placing immense trust in engineering undergraduate and graduate programs to train future developers of AI in their ethical and public welfare responsibilities. In this paper, we investigate whether engineering master’s students believe they receive the training they need from their educational curricula to negotiate this complex ethical landscape. The goal of the broader project is to understand how engineering students become public welfare “watchdogs”; i.e., how they learn to recognize and respond to their public welfare responsibilities. As part of this project, we conducted in-depth interviews with 62 electrical and computer engineering master’s students at a large public university about their educational experiences and understanding of engineers’ professional responsibilities, including those related specifically to AI technologies. This paper asks, (1) do engineering master’s students see potential dangers of AI related to how the technologies are developed, used, or possibly misused? (2) Do they feel equipped to handle the challenges of these technologies and respond ethically when faced with difficult situations? (3) Do they hold their engineering educators accountable for training them in ethical concerns around AI? We find that although some engineering master’s students see exciting possibilities of AI, most are deeply concerned about the ethical and public welfare issues that accompany its advancement and deployment. 
While some students feel equipped to handle these challenges, the majority feel unprepared to manage these complex situations in their professional work. Additionally, students reported that the ethical development and application of technologies like AI are often not included in curricula or are viewed as “soft skills” that are not as important as “technical” knowledge. Although some students we interviewed shared the sense of apathy toward these topics that they see from their engineering program, most were eager to receive more training in AI ethics. These results underscore the pressing need for engineering education programs, including graduate programs, to integrate comprehensive ethics, public responsibility, and whistleblower training within their curricula to ensure that the engineers of tomorrow are well-equipped to address the novel ethical dilemmas of AI that are likely to arise in the coming years.
-
Artificial intelligence (AI) tools and technologies are increasingly prevalent in society. Many teens interact with AI devices on a daily basis but often have a limited understanding of how AI works, as well as how it impacts society more broadly. It is critical to develop youths’ understanding of AI, cultivate ethical awareness, and support diverse youth in pursuing computer science to help ensure future development of more equitable AI technologies. Here, we share our experiences developing and remotely facilitating an interdisciplinary AI ethics program for secondary students designed to increase teens’ awareness and understanding of AI and its societal impacts. Students discussed stories with embedded ethical dilemmas, engaged with AI media and simulations, and created digital products to express their stance on an AI ethics issue. Across four iterations in formal and informal settings, we found students to be engaged in AI stories and invested in learning about AI and its societal impacts. Short stories were effective in raising awareness, focusing discussion, and supporting students in developing a more nuanced understanding of AI ethics issues, such as fairness, bias, and privacy.