This work-in-progress paper explores university students' perspectives on Generative Artificial Intelligence (GAI) tools, such as ChatGPT, an increasingly prominent topic in the academic community. There is ongoing debate about whether faculty should teach students how to use GAI tools, restrict their usage to maintain academic integrity, or establish regulatory guidelines for sustained integration into higher education. Unfortunately, limited research exists beyond surface-level policies and educator opinions, and the full impact of GAI on student learning remains largely unknown. Understanding students' perceptions and how they use GAI is therefore crucial to ensuring its effective and ethical integration into higher education. As GAI continues to disrupt traditional educational paradigms, this study explores how students perceive its influence on their learning and problem-solving. As part of a larger mixed-methods study, this paper presents preliminary findings from the qualitative portion, which uses a phenomenological approach to answer the research question: How do university students perceive disruptive technologies like ChatGPT affecting their education and learning? By exploring the implications of Artificial Intelligence (AI) tools for student learning, academic integrity, individual beliefs, and community norms, this study contributes to the broader discourse on the role of emerging technologies in shaping the future of teaching and learning.
“What Makes ChatGPT Dangerous is Also What Makes It Special”: High-School Student Perspectives on the Integration or Ban of Artificial Intelligence in Educational Contexts
The emergence of ChatGPT, an AI-powered language model, has sparked numerous debates and discussions. In educational research, scholars have raised significant questions about the potential, limitations, and ethical concerns surrounding the use of this technology. While research on the application and implications of ChatGPT in academic settings exists, analysis of the perspectives of high-school students is limited. In this study, we use qualitative content analysis to explore high-school students' perspectives on the integration or ban of ChatGPT in their schools through the lens of the Technology Acceptance Model (TAM2). Data were sourced from students' comments on a New York Times Learning Network article. Findings revealed that students' perceptions about integrating or banning ChatGPT in schools are influenced by their assessments of the technology's usefulness, personal experiences, societal technology trends, and ethical considerations. Our findings suggest that the student perspectives in this study align with those of educators and policymakers while also reflecting unique needs and experiences. Implications emphasize the significance of an inclusive decision-making process around the integration of AI in educational contexts, one that includes students alongside other stakeholders.
- Award ID(s):
- 2238712
- PAR ID:
- 10502898
- Publisher / Repository:
- The IJTE is affiliated with the International Society for Technology, Education, and Science (ISTES)
- Date Published:
- Journal Name:
- International Journal of Technology in Education
- Volume:
- 7
- Issue:
- 2
- ISSN:
- 2689-2758
- Page Range / eLocation ID:
- 174 to 199
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
This exploratory study focuses on the use of ChatGPT, a generative artificial intelligence (GAI) tool, by undergraduate engineering students for lab report writing in the major. Literature addressing the impact of ChatGPT and AI on student writing suggests that such technologies can both support and limit students' composing and learning processes. Given the history of writing with technologies and of writing as technology, the development of GAI warrants attention to pedagogical and ethical implications in writing-intensive engineering classes. This pilot study investigates how the use of ChatGPT affects students' lab writing outcomes in terms of rhetorical knowledge, critical thinking and composing, knowledge of conventions, and writing processes. A group of undergraduate volunteers (n = 7) used ChatGPT to revise engineering lab reports they had originally written without it. A comparative study was conducted between the original lab reports and the revisions by directly assessing students' lab reports in gateway engineering lab courses. A focus group was conducted to learn about students' experiences with and perspectives on ChatGPT in the context of engineering lab report writing. Using ChatGPT in the revision process could improve engineering students' lab report quality by enhancing their understanding of the lab report genre. At the same time, its use also led students to include false claims, incorrect lab procedures, or overly broad statements, which are not valued in the engineering lab report genre.
-
Increased use of technology in schools raises new privacy and security challenges for K-12 students, including harms such as commercialization of student data, exposure of student data in security breaches, and expanded tracking of students, but the extent of these challenges is unclear. In this paper, we first interviewed 18 school officials and IT personnel to understand which educational technologies districts use and how they manage student privacy and security around these technologies. Second, to determine whether these educational technologies are frequently endorsed across United States (US) public schools, we compiled a list of linked educational technology websites scraped from 15,573 K-12 public school/district domains and analyzed them for privacy risks. Our findings suggest that administrators lack the resources to properly assess privacy and security issues around educational technologies, even though these technologies do pose potential privacy risks. Based on these findings, we make recommendations for policymakers, educators, and the CHI research community.
-
As generative artificial intelligence (AI) becomes increasingly integrated into society and education, more institutions are implementing AI usage policies and offering introductory AI courses. These courses, however, should not replicate the technical focus typically found in introductory computer science (CS) courses like CS1 and CS2. In this paper, we use an adjustable, interdisciplinary socio‐technical AI literacy framework to design and present an introductory AI literacy course. We present a refined version of this framework informed by the teaching of a 1‐credit general education AI literacy course (primarily for freshmen and first‐year students from various majors), a 3‐credit course for CS majors at all levels, and a summer camp for high school students. Drawing from these teaching experiences and the evolving research landscape, we propose an introductory AI literacy course design framework structured around four cross‐cutting pillars. These pillars encompass (1) understanding the scope and technical dimensions of AI technologies, (2) learning how to interact with (generative) AI technologies, (3) applying principles of critical, ethical, and responsible AI usage, and (4) analyzing implications of AI on society. We posit that achieving AI literacy is essential for all students, both those pursuing AI‐related careers and those following other educational or professional paths. This introductory course, positioned at the beginning of a program, creates a foundation for ongoing and advanced AI education. The course design approach is presented as a series of modules and subtopics under each pillar. We emphasize the importance of thoughtful instructional design, including pedagogy, expected learning outcomes, and assessment strategies. This approach not only integrates social and technical learning but also democratizes AI education across diverse student populations and equips all learners with the socio‐technical, multidisciplinary perspectives necessary to navigate and shape the ethical future of AI.
-
The introduction of generative artificial intelligence (GenAI) has been met with a mix of reactions by higher education institutions, ranging from consternation and resistance to wholehearted acceptance. Previous work has examined the discourse and policies adopted by universities across the U.S. and by educators, along with the inclusion of GenAI-related content and topics in higher education. Building on that research, this study reports findings from a survey of engineering educators on their use of and perspectives toward generative AI. Specifically, we surveyed 98 educators from engineering, computer science, and education who participated in a workshop on GenAI in Engineering Education to learn about their perspectives on using these tools for teaching and research. We asked them about their use of and comfort with GenAI, their overall perspectives on GenAI, and the challenges and potential harms of using it for teaching, learning, and research, and examined whether their approach to using and integrating GenAI in their classrooms influenced their experiences with and perceptions of it. Consistent with other research in GenAI education, we found that while the majority of participants were somewhat familiar with GenAI, reported use varied considerably. We found that educators harbored mostly hopeful and positive views about the potential of GenAI. We also found that those who engaged more with their students on the topic of GenAI, both as communicators (those who spoke directly with their students) and as incorporators (those who included it in their syllabus), tend to be more positive about its contribution to learning, while also being more attuned to its potential abuses. These findings suggest that integrating and engaging with generative AI is essential to foster productive interactions between instructors and students around this technology. Our work ultimately contributes to the evolving discourse on GenAI use, integration, and avoidance within educational settings. Through exploratory quantitative research, we have identified specific areas for further investigation.

