-
Mills, Caitlin; Alexandron, Giora; Taibi, Davide; Lo Bosco, Giosuè; Paquette, Luc (Eds.) There is a growing community of researchers at the intersection of data mining, AI, and computing education research. The objective of the CSEDM workshop is to facilitate a discussion among this research community, with a focus on how data mining can be uniquely applied in computing education research. For example, what new techniques are needed to analyze program code and CS log data? How do results from CS education inform our analysis of this data? The workshop is meant to be an interdisciplinary event at the intersection of EDM and Computing Education Research. Researchers, faculty, and students are encouraged to share their AI- and data-driven approaches, methodologies, and experiences where data transforms how students learn Computer Science (CS) skills. This full-day workshop will feature paper presentations and discussions to promote collaboration.
Free, publicly accessible full text available July 20, 2026.
-
Understanding student practice behavior and its connection to their learning is essential for effective recommender systems that provide personalized learning support. In this study, we apply a sequential pattern mining approach to analyze student practice behavior in a practice system for introductory Python programming. Our goal is to identify different types of practice behavior and connect them to student performance. We examine two types of practice sequences: (1) by login session and (2) by learning topic. For each sequence type, we use SPAM (Sequential PAttern Mining) to identify the most frequent micro-patterns and build behavior profiles of individual learners as vectors of micro-pattern frequencies observed in their behavior. We confirm that these vectors are stable for both sequence types (p < 0.03 for session sequences and p < 0.003 for topic sequences). Using the vectors, we perform K-means clustering and identify two practice behaviors: example explorers and persistent finishers. We repeat this experiment using different coding approaches for student sequences and obtain similar clusters. Our results suggest that example explorers and persistent finishers might represent two typical types of divergent student behaviors in a programming practice system. Finally, to better understand the relationship between students' background knowledge, learning outcomes, and practice behavior, we perform statistical analyses to assess the significance of the associations among pre-test scores, cluster assignments, and final course grades.
Free, publicly accessible full text available July 20, 2026.
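As an illustration of the profile-building step described above, the sketch below (not the authors' code) turns hypothetical per-student micro-pattern counts into frequency vectors and clusters them with K-means. The micro-pattern names, counts, and scikit-learn setup are assumptions for demonstration only; in the study, the micro-patterns come from SPAM applied to real session and topic sequences.

    # Illustrative sketch: given per-student micro-pattern counts, build
    # frequency vectors and cluster them with K-means (k = 2).
    # All micro-pattern names and counts below are hypothetical placeholders.
    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical micro-patterns mined from session/topic sequences.
    patterns = ["example->example", "example->problem",
                "problem(fail)->problem(fail)", "problem(fail)->problem(success)"]

    # Hypothetical per-student counts of each micro-pattern.
    student_counts = {
        "s1": [9, 2, 1, 1],   # browses many examples -> "example explorer"
        "s2": [1, 2, 6, 7],   # retries problems until solved -> "persistent finisher"
        "s3": [8, 3, 0, 2],
        "s4": [0, 1, 5, 8],
    }

    # Normalize counts into frequency profiles so sequence length does not dominate.
    X = np.array(list(student_counts.values()), dtype=float)
    X = X / X.sum(axis=1, keepdims=True)

    # Two clusters, mirroring the example-explorer / persistent-finisher split.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    for student, label in zip(student_counts, labels):
        print(student, "-> cluster", label)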
-
Knowledge tracing is a method to model students' knowledge and enable personalized education in many STEM disciplines such as mathematics and physics, but it has so far remained a challenging task in computing disciplines. One key obstacle to successful knowledge tracing in computing education lies in the accurate extraction of knowledge components (KCs), since multiple intertwined KCs are practiced at the same time in programming problems. In this paper, we address the limitations of current methods and explore a hybrid approach for KC extraction, which combines automated code parsing with an expert-built ontology. We use an introductory (CS1) Java benchmark dataset to compare its KC extraction performance with traditional extraction methods using a state-of-the-art evaluation approach based on learning curves. Our preliminary results show considerable improvement over traditional methods of student modeling. The results indicate an opportunity to improve automated KC extraction in CS education by incorporating expert knowledge into the process.
Free, publicly accessible full text available June 13, 2026.
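The paper's pipeline parses Java programs against an expert-built ontology; as a rough, hedged analogue, the sketch below uses Python's ast module to pull syntactic constructs out of a snippet and map them to KC labels through a small hand-made ontology. The ontology entries and KC names are hypothetical and are not taken from the paper.

    # Illustrative analogue (not the paper's Java pipeline): extract candidate
    # knowledge components (KCs) from a program by parsing its AST, then refine
    # them through a small expert-built mapping. The mapping below is hypothetical.
    import ast

    EXPERT_ONTOLOGY = {            # hypothetical expert mapping: AST node -> KC label
        "For": "loops.for",
        "While": "loops.while",
        "If": "conditionals.if",
        "FunctionDef": "functions.definition",
        "Subscript": "collections.indexing",
    }

    def extract_kcs(source: str) -> set[str]:
        """Return the set of KC labels exercised by a piece of code."""
        tree = ast.parse(source)
        kcs = set()
        for node in ast.walk(tree):
            label = EXPERT_ONTOLOGY.get(type(node).__name__)
            if label:
                kcs.add(label)
        return kcs

    print(extract_kcs("def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"))
    # -> {'functions.definition', 'loops.for'}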
-
Assessing student responses is a critical task in adaptive educational systems. More specifically, automatically evaluating students' self-explanations contributes to understanding their knowledge state, which is needed for personalized instruction, the crux of adaptive educational systems. To facilitate the development of Artificial Intelligence (AI) and Machine Learning models for the automated assessment of learners' self-explanations, annotated datasets are essential. In response to this need, we developed the SelfCode2.0 corpus, which consists of 3,019 pairs of student and expert explanations of Java code snippets, each annotated with semantic similarity, correctness, and completeness scores provided by experts. Alongside the dataset, we also provide performance results obtained with several baseline models based on TF-IDF and Sentence-BERT vectorial representations. This work aims to enhance the effectiveness of automated assessment tools in programming education and to contribute to better understanding and supporting student learning of programming.
Free, publicly accessible full text available May 14, 2026.
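To illustrate the two baseline representations mentioned above (TF-IDF and Sentence-BERT), here is a minimal sketch scoring the similarity of one student/expert explanation pair. The texts and the SBERT checkpoint are illustrative choices; this is not the SelfCode2.0 evaluation code.

    # Minimal sketch of the two baseline representations, applied to one
    # (student, expert) explanation pair. Model name and texts are illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    from sentence_transformers import SentenceTransformer, util

    student = "The loop adds every element of the array to the variable sum."
    expert = "This for loop iterates over the array and accumulates the total in sum."

    # Baseline 1: TF-IDF vectors + cosine similarity.
    tfidf = TfidfVectorizer().fit([student, expert])
    vecs = tfidf.transform([student, expert])
    print("TF-IDF similarity:", cosine_similarity(vecs[0], vecs[1])[0, 0])

    # Baseline 2: Sentence-BERT embeddings + cosine similarity.
    sbert = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative checkpoint choice
    emb = sbert.encode([student, expert], convert_to_tensor=True)
    print("SBERT similarity:", util.cos_sim(emb[0], emb[1]).item())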
-
Code completion problems are an effective type of formative assessment, especially when used to practice newly learned concepts or topics. While there is a growing body of research in computing education on the use of large language models (LLMs) to support learning content development, the use of LLMs for producing high-quality code completion problems has not yet been explored. In this paper, we analyze the capability of LLMs to generate effective distractors (i.e., plausible but incorrect options) and explanations for completion problems. We utilize common student misconceptions to improve the quality of the generated distractors. Our study suggests that LLMs are capable of generating reasonable distractors and explanations. At the same time, we identify the lack of a sufficiently granular taxonomy of common student misconceptions needed to align the generated distractors with common misconceptions and errors -- a gap that should be addressed in future work.
Free, publicly accessible full text available May 14, 2026.
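A hypothetical sketch of how a misconception-informed distractor prompt might look with the OpenAI Python client follows; the model name, prompt wording, and misconception are illustrative and are not the prompts used in the study.

    # Hypothetical sketch of eliciting misconception-based distractors; the model,
    # prompt, and misconception are illustrative, not taken from the study.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    problem = """Complete the blank so the loop sums the list:
    total = 0
    for x in nums:
        ____"""
    misconception = ("Students often write 'total == total + x', "
                     "confusing comparison with assignment.")

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice
        messages=[
            {"role": "system",
             "content": "You write multiple-choice options for code completion problems."},
            {"role": "user",
             "content": f"Problem:\n{problem}\n\nKnown misconception: {misconception}\n"
                        "Give one correct completion, three plausible but incorrect "
                        "distractors grounded in that misconception, and a one-sentence "
                        "explanation for each option."},
        ],
    )
    print(response.choices[0].message.content)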
-
As large language models (LLMs) show great promise in generating a wide spectrum of educational materials, robust yet cost-effective assessment of the quality and effectiveness of such materials becomes an important challenge. Traditional approaches, including expert-based quality assessment and student-centered evaluation, are resource-consuming and do not scale efficiently. In this work, we explored the use of pre-existing student learning data as a promising approach to evaluating LLM-generated learning materials. Specifically, we used a dataset in which students completed program construction challenges by picking the correct answers among human-authored distractors, and used it to evaluate the quality of LLM-generated distractors for the same challenges. The dataset included responses from 1,071 students across 22 classes taught from Fall 2017 to Spring 2023. We evaluated five prominent LLMs (OpenAI-o1, GPT-4, GPT-4o, GPT-4o-mini, and Llama-3.1-8b) across three different prompts to see which combinations result in more effective distractors, i.e., those that are plausible (often picked by students) and potentially based on common misconceptions. Our results suggest that GPT-4o was the most effective model, matching close to 50% of the functional distractors originally authored by humans. At the same time, all of the evaluated LLMs generated many novel distractors, i.e., ones that did not match the pre-existing human-authored ones. Our preliminary analysis shows that these appear promising; establishing their effectiveness in real-world classroom settings is left for future work.
Free, publicly accessible full text available March 3, 2026.
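The matching step described above might look roughly like the sketch below: normalize each distractor and check which LLM-generated options coincide with human-authored distractors that students actually picked. All distractors and counts shown are invented for illustration; the study's actual matching procedure may differ.

    # Illustrative sketch of the matching analysis: normalize distractors and count
    # how many LLM-generated options coincide with human-authored ones. Made-up data.
    import re

    def normalize(code: str) -> str:
        """Crude normalization: lowercase and collapse whitespace."""
        return re.sub(r"\s+", " ", code.strip().lower())

    human_distractors = {            # hypothetical option -> times picked by students
        "total == total + x": 41,
        "total = x": 17,
        "total += 1": 9,
    }
    llm_distractors = ["total == total + x", "total = x ", "nums.append(total)"]

    matched = [d for d in llm_distractors
               if normalize(d) in {normalize(h) for h in human_distractors}]
    novel = [d for d in llm_distractors if d not in matched]

    print(f"matched {len(matched)}/{len(llm_distractors)} human-authored distractors")
    print("novel distractors needing classroom evaluation:", novel)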
-
Novice programmers can greatly improve their understanding of challenging programming concepts by studying worked examples that demonstrate the implementation of these concepts. Despite the extensive repositories of effective worked examples created by CS education experts, a key challenge remains: identifying the worked example most relevant to a given programming problem and to the specific difficulties a student faces in solving it. Previous studies have explored similar example recommendation approaches. Our research introduces a novel method that utilizes deep learning code representation models to generate code vectors, capturing both syntactic and semantic similarities among programming examples. Driven by the need to provide relevant and personalized examples to programming students, our approach emphasizes similarity assessment and clustering techniques to identify similar code problems, examples, and challenges. This method aims to deliver more accurate and contextually relevant recommendations based on individual learning needs. Providing tailored support to students in real time facilitates better problem-solving strategies and enhances students' learning experiences, contributing to the advancement of programming education.
Free, publicly accessible full text available February 12, 2026.
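A hedged sketch of the embed-and-rank idea follows: encode code snippets with a pretrained code representation model and recommend the worked example closest to the student's current problem. The CodeBERT checkpoint and mean pooling are assumptions, not necessarily the representation model or pooling used in this work.

    # Hedged sketch: embed code snippets with a pretrained code representation
    # model and recommend the closest worked example by cosine similarity.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    model = AutoModel.from_pretrained("microsoft/codebert-base")

    def embed(code: str) -> torch.Tensor:
        """Mean-pooled hidden states as a code vector."""
        inputs = tokenizer(code, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state     # (1, tokens, 768)
        return hidden.mean(dim=1).squeeze(0)

    examples = {                                           # toy worked-example pool
        "sum_loop":    "int s = 0; for (int x : a) { s += x; }",
        "find_max":    "int m = a[0]; for (int x : a) { if (x > m) m = x; }",
        "string_join": "String r = \"\"; for (String w : ws) { r += w; }",
    }
    problem = "int total = 0; for (int v : values) { total += v; }"

    p_vec = embed(problem)
    scores = {name: torch.cosine_similarity(p_vec, embed(code), dim=0).item()
              for name, code in examples.items()}
    print(max(scores, key=scores.get), scores)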
-
From the early days of digital textbooks to the rapidly progressing age of Large Language Models, researchers from different areas have explored multiple research directions at the crossroads of textbooks, a traditional learning medium, and Artificial Intelligence, a technology that could empower it. These research directions have formed a new research area, often referred to as Intelligent Textbooks. The International Workshop on Intelligent Textbooks at AIED 2025, the sixth workshop in the series, aims to bring together researchers working on different aspects of intelligent textbooks to exchange complementary insights, review new results, and discuss emerging ideas.
Free, publicly accessible full text available January 1, 2026.
-
The ability to predict student performance in introductory programming courses is important for helping struggling students and enhancing their persistence. However, for this prediction to be impactful, it is crucial that it remains transparent and accessible to both instructors and students, ensuring effective utilization of the predicted results. Machine learning models with explainable features provide an effective means for students and instructors to comprehend students' diverse programming behaviors and problem-solving strategies, elucidating the factors contributing to both successful and suboptimal performance. This study develops an explainable model that predicts student performance based on programming assignment submission information at different stages of the course, enabling early, explainable predictions. We extract data-driven features from student programming submissions and utilize a stacked ensemble model to predict final exam grades. The experimental results suggest that our model successfully predicts student performance based on their programming submissions earlier in the semester. Employing SHAP, a game-theory-based framework, we explain the model's predictions, aiding stakeholders in understanding the influence of diverse programming behaviors on students' success. Additionally, we analyze crucial features, employing a mix of descriptive statistics and mixture models to identify distinct student profiles based on their problem-solving patterns, enhancing overall explainability. Furthermore, we dive deeper into these profiles, using students' programming patterns to elucidate the characteristics of students for whom SHAP explanations are not readily comprehensible. Our explainable early prediction model elucidates common problem-solving patterns in students relative to their expertise, facilitating effective intervention and adaptive support.
Free, publicly accessible full text available November 29, 2025.
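As a rough sketch of the prediction-and-explanation pipeline (not the study's implementation), the code below fits a stacked ensemble on a few submission-derived features and explains its predictions with SHAP. The feature names, component models, and synthetic data are assumptions for illustration only.

    # Illustrative sketch: stacked ensemble over submission-derived features,
    # explained with a model-agnostic SHAP explainer. All data are synthetic.
    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestRegressor, StackingRegressor
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = pd.DataFrame({
        "attempts_per_problem": rng.uniform(1, 10, 200),
        "first_try_success_rate": rng.uniform(0, 1, 200),
        "avg_time_between_submissions": rng.uniform(1, 60, 200),
    })
    # Synthetic final-exam grade loosely tied to the features, for demonstration only.
    y = (60 + 30 * X["first_try_success_rate"]
         - 2 * X["attempts_per_problem"] + rng.normal(0, 5, 200))

    model = StackingRegressor(
        estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                    ("ridge", Ridge())],
        final_estimator=Ridge(),
    ).fit(X, y)

    # Model-agnostic SHAP explanation of the stacked model's predictions.
    explainer = shap.Explainer(model.predict, X)
    shap_values = explainer(X.iloc[:50])
    print(pd.Series(np.abs(shap_values.values).mean(axis=0), index=X.columns))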
-
Worked examples have consistently demonstrated their value in education, serving as model solutions for specific problem types. Past studies indicate that combining worked examples with practice problems is more effective than providing either problems or examples in isolation. Despite these findings, exploration of the effects of grouping worked examples and problems for programming practice is limited, especially in learning environments designed for practice. This paper compares two content organization approaches in a practice system. The first explicitly connects worked examples and completion problems, allowing students to access them in smaller bundles. The other delivers the same set of activities separately, keeping only an implicit connection by grouping them under a topic. We examined the effects of these two approaches on student engagement and performance in a semester-long classroom experiment conducted in a CS1 programming course. The results indicate that explicitly connecting worked examples and completion problems increased engagement with the completion problems and supported problem-solving performance, leading to higher success rates and persistence.