Title: XAI to Increase the Effectiveness of an Intelligent Pedagogical Agent.
We explore eXplainable AI (XAI) to enhance user experience and to understand the value of explanations of AI-driven pedagogical decisions within an Intelligent Pedagogical Agent (IPA). Our real-time, personalized explanations are tailored to students' attitudes to promote learning. In our empirical study, we evaluate the effectiveness of personalized explanations by comparing three versions of the IPA: (1) personalized explanations and suggestions, (2) suggestions but no explanations, and (3) no suggestions. Our results show that the IPA with personalized explanations significantly improves students' learning outcomes compared to the other two versions.
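As a rough illustration of the behavior described in the abstract, the Python sketch below mirrors the three study conditions and attaches an attitude-aware rationale to a pedagogical suggestion. The condition names, attitude labels, and message wording are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' implementation. It shows one way an
# IPA could mirror the three study conditions and attach an attitude-aware rationale
# ("personalized explanation") to a suggestion. Attitude categories are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Condition(Enum):
    EXPLAINED_SUGGESTIONS = auto()   # personalized explanations and suggestions
    SUGGESTIONS_ONLY = auto()        # suggestions but no explanations
    NO_SUGGESTIONS = auto()          # control


@dataclass
class Student:
    student_id: str
    attitude: str  # e.g. "anxious" or "confident" -- assumed labels


# Hypothetical attitude-keyed rationales.
RATIONALES = {
    "anxious": "This is a small, low-risk step that checks the idea you just practiced.",
    "confident": "This step pushes on the concept your recent answers show you are ready for.",
}


def agent_message(student: Student, suggestion: str, condition: Condition) -> Optional[str]:
    """Return what the agent would say to the student, or None in the control condition."""
    if condition is Condition.NO_SUGGESTIONS:
        return None
    if condition is Condition.SUGGESTIONS_ONLY:
        return suggestion
    rationale = RATIONALES.get(student.attitude, "This step matches your recent progress.")
    return f"{suggestion} Why: {rationale}"


print(agent_message(Student("s1", "anxious"),
                    "Review the worked example before retrying the exercise.",
                    Condition.EXPLAINED_SUGGESTIONS))
```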
Award ID(s):
2013502
PAR ID:
10525825
Author(s) / Creator(s):
; ; ; ; ;
Publisher / Repository:
ACM
Date Published:
Format(s):
Medium: X
Location:
In Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents (IVA 2023)
Sponsoring Org:
National Science Foundation
More Like this
  1. Novice programmers need to write basic code as part of the learning process, but they often face difficulties. To assist struggling students, we recently implemented personalized Parsons problems, code puzzles in which students arrange given blocks of code into the correct order, as pop-up scaffolding. Students found them more engaging and preferred them for learning over simply receiving the correct answer, such as the response they might get from generative AI tools like ChatGPT. However, a drawback of using Parsons problems as scaffolding is that students may be able to put the code blocks in the correct order without fully understanding the rationale of the correct solution. As a result, the learning benefits of the scaffolding are compromised. Can we improve the understanding of personalized Parsons scaffolding by providing textual code explanations? In this poster, we propose a design that incorporates multiple levels of textual explanations for the Parsons problems. This design will be used in future technical evaluations and classroom experiments that explore whether adding textual explanations to Parsons problems improves their instructional benefits.
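The design described in item 1 might be represented with a data structure like the following: a Parsons problem whose blocks each carry several levels of textual explanation, from a brief hint up to the full rationale for the block's position. All identifiers and example content are hypothetical; this is a sketch of the idea, not the poster's system.

```python
# Illustrative sketch (identifiers are hypothetical, not from the poster): a Parsons
# problem whose code blocks each carry multiple levels of textual explanation.
import random
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class CodeBlock:
    code: str
    explanations: tuple = ()  # ordered: short hint first, fullest rationale last


@dataclass
class ParsonsProblem:
    prompt: str
    solution: List[CodeBlock]

    def shuffled_blocks(self) -> List[CodeBlock]:
        """Blocks presented to the student in random order."""
        blocks = list(self.solution)
        random.shuffle(blocks)
        return blocks

    def is_correct(self, ordering: List[CodeBlock]) -> bool:
        return ordering == self.solution

    def explanation(self, block: CodeBlock, level: int) -> str:
        """Explanation at the requested level, clamped to what is available."""
        if not block.explanations:
            return ""
        return block.explanations[min(level, len(block.explanations) - 1)]


problem = ParsonsProblem(
    prompt="Sum the elements of a list.",
    solution=[
        CodeBlock("total = 0", ("Start the accumulator.", "total must exist before the loop adds to it.")),
        CodeBlock("for x in nums:", ("Visit each element.", "The loop drives the accumulation, one element per pass.")),
        CodeBlock("    total += x", ("Add to the accumulator.", "Indented under the loop so it runs once per element.")),
    ],
)
print(problem.explanation(problem.solution[0], level=1))
```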
  2. Natural language processing (NLP) tools can score students’ written explanations, opening new opportunities for science education. Optimally, these scores offer designers opportunities to align guidance with tested pedagogical frameworks and to investigate alternative ways to personalize instruction. We report on research, informed by the knowledge integration (KI) pedagogical framework, using online authorable and customizable environments (ACEs), to promote a deep understanding of complex scientific topics. We study how to personalize guidance to enable students to make productive revisions to written explanations during instruction, where they conduct investigations with models, simulations, hands-on activities, and other materials. We describe how we iteratively refined our assessments and guidance to support students in revising their scientific explanations. We report on recent investigations of hybrid models of personalized guidance that combine NLP scoring with opportunities for teachers to continue the conversation. 
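A minimal sketch of the hybrid-guidance idea in item 2: an automated score on a student's written explanation decides whether adaptive guidance is shown immediately or the response is also flagged for the teacher to continue the conversation. The keyword scorer, threshold, and hint texts below are stand-ins, not the study's NLP model or values.

```python
# Minimal sketch of hybrid personalized guidance: a trivial keyword scorer stands in
# for a real NLP/knowledge-integration model; threshold and hints are assumptions.
from dataclasses import dataclass

LINKING_TERMS = {"because", "therefore", "evidence", "so"}   # assumed example terms
TEACHER_FLAG_THRESHOLD = 2                                   # assumed cutoff


@dataclass
class GuidanceDecision:
    score: int
    automated_hint: str
    flag_for_teacher: bool


def score_explanation(text: str) -> int:
    """Stand-in score: how many linking/evidence terms the explanation uses."""
    return len(set(text.lower().split()) & LINKING_TERMS)


def route_guidance(text: str) -> GuidanceDecision:
    score = score_explanation(text)
    if score >= TEACHER_FLAG_THRESHOLD:
        return GuidanceDecision(score, "Good links -- now tie your evidence back to the claim.", False)
    # Low score: give a revision prompt and queue the response for teacher follow-up.
    return GuidanceDecision(score, "Add evidence from your investigation to support your idea.", True)


print(route_guidance("The ice melted because heat moved in, so the evidence supports conduction."))
```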
  3. We present the results of a study in which we provided students with textual explanations for learning-content recommendations, along with adaptive navigational support, in the context of a personalized system for practicing Java programming. We evaluated how varying the modality of access (no access vs. on-mouseover vs. on-click) influences how students interact with the learning platform and work with both recommended and non-recommended content. We found that students' persistence when solving recommended coding problems correlates with their learning gain, and that specific student-engagement metrics can be supported by the design of adequate navigational support and access to the recommendations' explanations.
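To make the analysis in item 3 concrete, the sketch below defines a simple persistence metric (attempts per recommended problem a student opened) and correlates it with normalized learning gain. The metric definition, data layout, and toy numbers are assumptions, not the study's instrumentation or results.

```python
# Hypothetical analysis sketch: persistence on recommended problems vs. learning gain.
from statistics import mean


def persistence(attempts_per_problem):
    """Average attempts the student made on the recommended problems they opened."""
    return mean(attempts_per_problem) if attempts_per_problem else 0.0


def normalized_gain(pre, post, max_score=100.0):
    return (post - pre) / (max_score - pre) if max_score > pre else 0.0


def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0


# (attempt counts on recommended problems, pre-test, post-test) per student -- toy data
records = [([1, 3, 2], 40.0, 70.0), ([1, 1], 50.0, 55.0), ([4, 5, 3], 30.0, 75.0)]
p = [persistence(a) for a, _, _ in records]
g = [normalized_gain(pre, post) for _, pre, post in records]
print(f"Pearson r between persistence and learning gain: {pearson(p, g):.2f}")
```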
  4. As artificial intelligence (AI) technology becomes increasingly pervasive, it is critical that students recognize AI and how it can be used. There is little research exploring the learning capabilities of elementary students or the pedagogical supports necessary to facilitate their learning. PrimaryAI was created as a 3rd-5th grade AI curriculum that uses problem-based and immersive learning within an authentic life-science context, through four units covering machine learning, computer vision, AI planning, and AI ethics. The curriculum was implemented by two upper elementary teachers during Spring 2022. Based on pre-test/post-test results, students were able to conceptualize AI concepts related to machine learning and computer vision, and results showed no significant differences based on gender. Teachers indicated that the curriculum engaged students and provided sufficient scaffolding for them to teach the content in their classrooms. Recommendations for future implementations include greater alignment between the AI and life-science concepts, alterations to the immersive problem-based learning environment, and enhanced connections to local animal populations.
  5. Assessing student responses is a critical task in adaptive educational systems. More specifically, automatically evaluating students' self-explanations contributes to understanding their knowledge state, which is needed for personalized instruction, the crux of adaptive educational systems. To facilitate the development of Artificial Intelligence (AI) and Machine Learning models for the automated assessment of learners' self-explanations, annotated datasets are essential. In response to this need, we developed the SelfCode2.0 corpus, which consists of 3,019 pairs of student and expert explanations of Java code snippets, each annotated with expert-provided semantic similarity, correctness, and completeness scores. Alongside the dataset, we also provide performance results for several baseline models based on TF-IDF and Sentence-BERT vector representations. This work aims to enhance the effectiveness of automated assessment tools in programming education and to contribute to a better understanding of, and support for, student learning of programming.
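The TF-IDF baseline mentioned in item 5 can be sketched as follows, scoring a student explanation against an expert explanation of the same snippet with cosine similarity; a Sentence-BERT variant would swap in sentence-transformers embeddings. This is an illustrative pipeline, not the corpus authors' exact setup, and the example sentences are invented.

```python
# Sketch of a TF-IDF similarity baseline for student-vs-expert code explanations.
# A Sentence-BERT variant would replace the vectorizer with sentence-transformers
# embeddings (e.g., SentenceTransformer("all-MiniLM-L6-v2")).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

student = "The loop adds every array element to sum and returns the total."
expert = "The method iterates over the array, accumulates each element into sum, and returns it."

# In practice the vectorizer would be fit on the whole corpus of explanations,
# not on a single pair; fitting on the pair keeps this sketch self-contained.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform([student, expert])
similarity = cosine_similarity(tfidf[0:1], tfidf[1:2])[0, 0]
print(f"TF-IDF cosine similarity: {similarity:.2f}")
```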