The recent surge in artificial intelligence (AI) development has been met with increased attention to incorporating ethical engagement into machine learning discourse and development. This attention is especially noticeable within engineering education, where comprehensive ethics curricula are typically absent from the programs that train future engineers to develop AI technologies [1]. AI technologies often operate as black boxes, presenting both developers and users with a certain level of obscurity concerning their decision-making processes and a diminished potential for negotiating with their outputs [2]. Collaborative and reflective learning has the potential to engage students with the facets of ethical awareness that accompany algorithmic decision making, such as bias, security, transparency, and other ethical and moral dilemmas. However, few studies examine how students learn AI ethics in electrical and computer engineering courses. This paper explores the integration of STEMtelling, a pedagogical storytelling method/sensibility, into an undergraduate machine learning course. STEMtelling is a novel approach that invites participants (STEMtellers) to center their own interests and experiences through writing and sharing engineering stories (STEMtells) connected to course objectives. Employing a case study approach grounded in activity theory, we explore how students learn the ethical awareness that is intrinsic to being an engineer. During the STEMtelling process, STEMtellers blur the boundaries between social and technical knowledge to place themselves at the center of knowledge production. In this work in progress (WIP), we discuss algorithmic awareness, one of the themes identified as a practice in developing ethical awareness of AI through STEMtelling. Findings from this study will inform the continued development of STEMtelling and address the challenges of integrating ethics and the social perception of AI into machine learning courses.
Cultivating Ethical Engineers in the Age of AI and Robotics: An Educational Cultures Perspective
This paper considers the cultivation of ethical identities among future engineers and computer scientists, particularly those whose professional practice will extensively intersect with emerging technologies enabled by artificial intelligence (AI). Many current engineering and computer science students will go on to participate in the development and refinement of AI, machine learning, robotics, and related technologies, thereby helping to shape the future directions of these applications. Researchers have demonstrated the actual and potential deleterious effects that these technologies can have on individuals and communities. Together, these trends present a timely opportunity to steer AI and robotic design in directions that confront, or at least do not extend, patterns of discrimination, marginalization, and exclusion. Examining ethics interventions in AI and robotics education may yield insights into challenges and opportunities for cultivating ethical engineers. We present our ongoing research on engineering ethics education, examine how our work is situated with respect to current AI and robotics applications, and discuss a curricular module in “Robot Ethics” that was designed to achieve interdisciplinary learning objectives. Finally, we offer recommendations for more effective engineering ethics education, with a specific focus on emerging technologies.
- Award ID(s): 1909847
- PAR ID: 10312683
- Date Published:
- Journal Name: IEEE International Symposium on Technology and Society
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Engineering education researchers have identified a lack of alignment between the complexities of professional engineering contexts and the ways that we train and evaluate the ethical abilities and dispositions of engineers preparing for professional practice. The challenges that engineers face as practitioners are multifaceted, wicked problems situated in unique and varied disciplinary and industry contexts. Understanding the variations in how practicing engineers experience ethics in these complex professional contexts will support better alignment between engineering ethics instruction and what students might encounter in professional practice. While there is a need for richer and more contextually specific ethics training in many areas, our initial focus is the healthcare products industry. Thus, our NSF-funded CCE STEM project will enable us to analyze the alignment between frameworks for ethics education in engineering and the reality of engineering practice within the health products industry. In the first phase, the project has focused on understanding the different ways in which practicing engineers experience ethical issues in the health products industry, using phenomenography, an empirical research methodology for investigating the qualitatively different ways people experience a phenomenon. In the second phase, we have analyzed critical incidents that potentially give rise to this variation in experiencing ethics in practice. The findings of these studies are anticipated to serve as a guidepost for aligning educational strategies and developing effective training for future ethical practitioners. In our paper, we present an overview of the study (background and methods), progress to date, and how we expect the results to inform engineering ethics education and industry ethics training.
-
Although development of Artificial Intelligence (AI) technologies has been underway for decades, the acceleration of AI capabilities and the rapid expansion of user access in the past few years have elicited public excitement as well as alarm. Leaders in government and academia, as well as members of the public, are recognizing the critical need for the ethical production and management of AI. As a result, society is placing immense trust in engineering undergraduate and graduate programs to train future developers of AI in their ethical and public welfare responsibilities. In this paper, we investigate whether engineering master’s students believe they receive the training they need from their educational curricula to negotiate this complex ethical landscape. The goal of the broader project is to understand how engineering students become public welfare “watchdogs”; i.e., how they learn to recognize and respond to their public welfare responsibilities. As part of this project, we conducted in-depth interviews with 62 electrical and computer engineering master’s students at a large public university about their educational experiences and understanding of engineers’ professional responsibilities, including those related specifically to AI technologies. This paper asks: (1) Do engineering master’s students see potential dangers of AI related to how the technologies are developed, used, or possibly misused? (2) Do they feel equipped to handle the challenges of these technologies and respond ethically when faced with difficult situations? (3) Do they hold their engineering educators accountable for training them in ethical concerns around AI? We find that although some engineering master’s students see exciting possibilities for AI, most are deeply concerned about the ethical and public welfare issues that accompany its advancement and deployment. While some students feel equipped to handle these challenges, the majority feel unprepared to manage such complex situations in their professional work. Additionally, students reported that the ethical development and application of technologies like AI is often not included in curricula or is viewed as a “soft skill” that is less important than “technical” knowledge. Although some students we interviewed shared the sense of apathy toward these topics that they perceive in their engineering programs, most were eager to receive more training in AI ethics. These results underscore the pressing need for engineering education programs, including graduate programs, to integrate comprehensive ethics, public responsibility, and whistleblower training within their curricula to ensure that the engineers of tomorrow are well-equipped to address the novel ethical dilemmas of AI that are likely to arise in the coming years.
-
As artificial intelligence (AI) rapidly evolves, its integration into civil engineering presents both significant opportunities and challenges. Through a qualitative analysis of interview, survey, and reflection journal data, this study explores the perspectives of early-career civil engineers on the current and potential roles of AI in engineering practice. While AI is seen as a valuable tool for automating routine tasks and enhancing efficiency, concerns persist about its reliability, its ethical implications, and the potential for overreliance. Participants emphasized the importance of maintaining human oversight, with AI serving as an aid to rather than a replacement for engineering judgment. The study identifies key competencies essential for engineers to integrate AI effectively and ethically, including AI literacy, critical thinking, ethics, and cybersecurity awareness. As AI continues to influence the field, it is crucial to equip engineers with these competencies through education and ongoing professional development. The paper offers recommendations for integrating responsible AI practices into engineering education and the workplace, highlighting the need for continuous training in both technical skills and ethical decision-making. This research contributes to the growing literature on responsible AI integration, providing insights that can guide the future workforce in navigating the complexities of AI-enhanced engineering practices.
-
This paper describes the motivations and some directions for bringing insights and methods from moral and cultural psychology to bear on how engineering ethics is conceived, taught, and assessed. The audience for this paper is therefore not only engineering ethics educators and researchers but also administrators and organizations concerned with ethical behavior. Engineering ethics has typically been conceived and taught as a branch of professional and applied ethics with pedagogical aims, where students and practitioners learn about professional codes and/or Western ethical theories and then apply these resources to address issues presented in case studies about engineering and/or technology. As a result, accreditation and professional bodies have generally adopted ethical reasoning skills and/or moral knowledge as learning outcomes. However, this paper argues that such frameworks are psychologically “irrealist” and culturally biased: it is not clear that ethical judgments or behaviors are primarily the result of applying principles, or that the ethical concerns captured in professional codes or Western ethical theories do or should reflect the engineering ethical concerns of global populations. Individuals from Western, educated, industrialized, rich, democratic cultures are outliers on various psychological and social constructs, including self-concepts, thought styles, and ethical concerns. Yet engineering is more cross-cultural and international than ever before, with engineers and technologies spanning multiple cultures and countries. For instance, different national regulations and cultural values can come into conflict in the course of engineering work. Additionally, ethical judgments may result from intuitions closer to emotions than to reflective thought, and behaviors can be affected by unconscious, social, and environmental factors. To address these issues, this paper surveys work in engineering ethics education and assessment to date, shortcomings within these approaches, and how insights and methods from moral and cultural psychology could be used to improve engineering ethics education and assessment, making them more culturally responsive and psychologically realist at the same time.