This content will become publicly available on April 30, 2026

Title: The Responsible Development of Automated Student Feedback with Generative AI
Providing rich, constructive feedback to students is essential for supporting and enhancing their learning. Recent advancements in Generative Artificial Intelligence (AI), particularly with large language models (LLMs), present new opportunities to deliver scalable, repeatable, and instant feedback, effectively making abundant a resource that has historically been scarce and costly. From a technical perspective, this approach is now feasible due to breakthroughs in AI and Natural Language Processing (NLP). While the potential educational benefits are compelling, implementing these technologies also introduces a host of ethical considerations that must be thoughtfully addressed. One of the core advantages of AI systems is their ability to automate routine and mundane tasks, potentially freeing up human educators for more nuanced work. However, the ease of automation risks a “tyranny of the majority”, where the diverse needs of minority or unique learners are overlooked, as they may be harder to systematize and less straightforward to accommodate. Ensuring inclusivity and equity in AI-generated feedback, therefore, becomes a critical aspect of responsible AI implementation in education. The process of developing machine learning models that produce valuable, personalized, and authentic feedback also requires significant input from human domain experts. Decisions around whose expertise is incorporated, how it is captured, and when it is applied have profound implications for the relevance and quality of the resulting feedback. Additionally, the maintenance and continuous refinement of these models are necessary to adapt feedback to evolving contextual, theoretical, and student-related factors. Without ongoing adaptation, feedback risks becoming obsolete or mismatched with the current needs of diverse student populations. Addressing these challenges is essential not only for ethical integrity but also for building the operational trust needed to integrate AI-driven systems as valuable tools in contemporary education. Thoughtful planning and deliberate choices are needed to ensure that these solutions truly benefit all students, allowing AI to support an inclusive and dynamic learning environment.
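The abstract stays at the level of principles, but the kind of pipeline it envisions can be sketched concretely: a student submission and an instructor-authored rubric are sent to a general-purpose LLM, and the model's draft feedback is held for human review before it reaches students. The sketch below is a minimal illustration only; the client library, model name, rubric, and prompt wording are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (assumed, not from the paper) of an automated-feedback pipeline:
# a submission plus an instructor-authored rubric go to an LLM, and the draft
# is reviewed by the course team before release to students.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical rubric; in practice this would come from the course's domain experts.
RUBRIC = """\
1. States the design problem and constraints clearly.
2. Justifies the chosen approach against at least one alternative.
3. Discusses limitations and next steps.
"""

def draft_feedback(submission: str, model: str = "gpt-4o-mini") -> str:
    """Return draft formative feedback on a student submission, judged against RUBRIC."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a teaching assistant. Give constructive, specific "
                    "feedback against the rubric. Be encouraging and make no "
                    "assumptions about the student's background."
                ),
            },
            {
                "role": "user",
                "content": f"Rubric:\n{RUBRIC}\nSubmission:\n{submission}",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Drafts like this one would be sampled and reviewed by educators
    # (the human domain experts the abstract emphasizes) before release.
    print(draft_feedback("My design uses a single pump because it is cheaper."))
```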
Award ID(s):
2319137 1954556
PAR ID:
10589950
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
IEEE EDUCON 2025
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: As generative artificial intelligence (AI) becomes increasingly integrated into society and education, more institutions are implementing AI usage policies and offering introductory AI courses. These courses, however, should not replicate the technical focus typically found in introductory computer science (CS) courses like CS1 and CS2. In this paper, we use an adjustable, interdisciplinary socio‐technical AI literacy framework to design and present an introductory AI literacy course. We present a refined version of this framework informed by the teaching of a 1‐credit general education AI literacy course (primarily for freshmen and first‐year students from various majors), a 3‐credit course for CS majors at all levels, and a summer camp for high school students. Drawing from these teaching experiences and the evolving research landscape, we propose an introductory AI literacy course design framework structured around four cross‐cutting pillars. These pillars encompass (1) understanding the scope and technical dimensions of AI technologies, (2) learning how to interact with (generative) AI technologies, (3) applying principles of critical, ethical, and responsible AI usage, and (4) analyzing implications of AI on society. We posit that achieving AI literacy is essential for all students, both those pursuing AI‐related careers and those following other educational or professional paths. This introductory course, positioned at the beginning of a program, creates a foundation for ongoing and advanced AI education. The course design approach is presented as a series of modules and subtopics under each pillar. We emphasize the importance of thoughtful instructional design, including pedagogy, expected learning outcomes, and assessment strategies. This approach not only integrates social and technical learning but also democratizes AI education across diverse student populations and equips all learners with the socio‐technical, multidisciplinary perspectives necessary to navigate and shape the ethical future of AI.
  2. Simulation-based learning has become a cornerstone of healthcare education, fostering essential skills like communication, teamwork or decision-making in safe, controlled environments. However, participants’ reflections on simulations often rely on subjective recollections, limiting their effectiveness in promoting learning. This symposium explores how multimodal analytics and AI can enhance simulation-based education by automating the analysis of teamwork data, providing structured feedback, and supporting reflective practices. The papers examine real-time analytics for closed-loop communication in cardiac arrest simulations, the use of multimodal data to refine feedback in ICU nursing simulations, generative AI-powered chatbots facilitating nursing students' interpretation of multimodal learning analytics dashboards, and culturally sensitive, AI-based scenarios for Breaking Bad News in an Indian context. Collectively, these contributions highlight the transformative potential of data- and AI-enhanced solutions, emphasizing personalization, cultural sensitivity, and human-centered design, and invite dialogue on the pedagogical, technological and ethical implications of introducing data-based practices and AI-based tools in medical education.
  3. Generative artificial intelligence has become prevalent in discussions of educational technology, particularly in the context of mathematics education. These AI models can engage in human‐like conversation and generate answers to complex questions in real‐time, with education reports accentuating their potential to make teachers' work more efficient and improve student learning. This paper provides a review of the current literature on generative AI in mathematics education, focusing on four areas: generative AI for mathematics problem‐solving, generative AI for mathematics tutoring and feedback, generative AI to adapt mathematical tasks, and generative AI to assist mathematics teachers in planning. The paper discusses ethical and logistical issues that arise with the application of generative AI in mathematics education, and closes with some observations, recommendations, and future directions. 
  4. Generative Artificial Intelligence has become prevalent in discussions of educational technology. These AI models can engage in human-like conversation and generate answers to complex questions in real-time, with education reports accentuating their potential to make teachers’ work more efficient and improve student learning. In this paper, I provide a review of the current literature on generative AI in mathematics education, focusing on four areas: generative AI for mathematics problem-solving, generative AI for mathematics tutoring and feedback, generative AI to adapt mathematical tasks, and generative AI to assist mathematics teachers in planning. I then discuss ethical and logistical issues that arise with the application of generative AI in mathematics education, and close with some observations, recommendations, and future directions for the field. 
  5. While offering the potential to support learning interactions, emerging AI applications like Large Language Models (LLMs) come with ethical concerns. Grounding technology design in human values can address AI ethics and ensure adoption. To this end, we apply Value‐Sensitive Design—involving empirical, conceptual and technical investigations—to centre human values in the development and evaluation of LLM‐based chatbots within a high school environmental science curriculum. Representing multiple perspectives and expertise, the chatbots help students refine their causal models of climate change's impact on local marine ecosystems, communities and individuals. We first perform an empirical investigation leveraging participatory design to explore the values that motivate students and educators to engage with the chatbots. Then, we conceptualize the values that emerge from the empirical investigation by grounding them in research in ethical AI design, human values, human‐AI interactions and environmental education. Findings illuminate considerations for the chatbots to support students' identity development, well‐being, human–chatbot relationships and environmental sustainability. We further map the values onto design principles and illustrate how these principles can guide the development and evaluation of the chatbots. Our research demonstrates how to conduct contextual, value‐sensitive inquiries of emergent AI technologies in educational settings.
Practitioner notes
What is already known about this topic
- Generative artificial intelligence (GenAI) technologies like Large Language Models (LLMs) can not only support learning, but also raise ethical concerns such as transparency, trust and accountability.
- Value‐sensitive design (VSD) presents a systematic approach to centring human values in technology design.
What this paper adds
- We apply VSD to design LLM‐based chatbots in environmental education and identify values central to supporting students' learning.
- We map the values emerging from the VSD investigations to several stages of GenAI technology development: conceptualization, development and evaluation.
Implications for practice and/or policy
- Identity development, well‐being, human–AI relationships and environmental sustainability are key values for designing LLM‐based chatbots in environmental education.
- Using educational stakeholders' values to generate design principles and evaluation metrics for learning technologies can promote technology adoption and engagement.