-
Kochmar, E.; Bexte, M.; Burstein, J.; Horbach, A.; Laarmann-Quante, R.; Tack, A.; Yaneva, V.; Yuan, Z. (Eds.)
The practice of soliciting self-explanations from students is widely recognized for its pedagogical benefits. However, the labor-intensive effort required to manually assess students' explanations makes it impractical for classroom settings. As a result, many current solutions for gauging students' understanding during class are limited to multiple-choice or fill-in-the-blank questions, which are less effective at exposing misconceptions or helping students understand and integrate new concepts. Recent advances in large language models (LLMs) present an opportunity to assess student explanations in real time, making explanation-based classroom response systems feasible to implement. In this work, we investigate LLM-based approaches for assessing the correctness of students' explanations in response to undergraduate computer science questions. We compare alternative prompting approaches for multiple LLMs (Llama 2, GPT-3.5, and GPT-4) against fine-tuned FLAN-T5 models. The results suggest that fine-tuned FLAN-T5 achieves the highest accuracy and weighted F1 score, while an in-context learning approach with GPT-4 attains the highest macro F1 score.
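As a rough illustration of the kind of pipeline this abstract describes, the sketch below frames correctness assessment as sequence-to-sequence generation with an off-the-shelf FLAN-T5 checkpoint. The prompt template, label set, checkpoint size, and example question are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch (not the paper's code): assessing a student explanation with a
# seq2seq model such as FLAN-T5 by asking it to emit a correctness label.
# Prompt wording, labels, and checkpoint are assumptions for illustration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-base"  # assumed checkpoint; other sizes could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def assess_explanation(question: str, reference: str, explanation: str) -> str:
    """Return a predicted correctness label for a student's explanation."""
    prompt = (
        "Question: " + question + "\n"
        "Reference answer: " + reference + "\n"
        "Student explanation: " + explanation + "\n"
        "Is the student's explanation correct, partially correct, or incorrect?"
    )
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=8)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Hypothetical usage with a toy computer science question.
print(assess_explanation(
    "What does a stack's pop() operation do?",
    "It removes and returns the most recently pushed element.",
    "It returns the first element that was added to the stack.",
))
```

In practice, the same prompt-and-label framing could be used either zero-shot or after fine-tuning the model on labeled student explanations, which is the comparison the abstract reports.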
-
Corlu, C. G.; Hunter, S. R.; Lam, H.; Onggo, B. S.; Shortle, J.; Biller, B. (Eds.)
Calibration is a crucial step for model validity, yet its representation is often disregarded. This paper proposes a two-stage approach to calibrating a model against target data that identifies multiple diverse parameter sets while remaining computationally efficient. The first stage employs a black-box optimization algorithm to generate near-optimal parameter sets, and the second stage clusters the generated parameter sets. Five black-box optimization algorithms, namely Latin Hypercube Sampling (LHS), Sequential Model-based Algorithm Configuration (SMAC), Optuna, Simulated Annealing (SA), and Genetic Algorithm (GA), are tested and compared using a disease-opinion compartmental model with predicted health outcomes. Results show that LHS and Optuna allow more exploration and capture more variety in possible future health outcomes. SMAC, SA, and GA are better at finding the single best parameter set, but their sampling approach generates less diverse model outcomes. This two-stage approach can reduce computation time while producing robust and representative calibrations.
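A minimal sketch of the two-stage idea is given below, assuming a toy stand-in for the compartmental model: stage one samples candidate parameter sets with Latin Hypercube Sampling and keeps the near-optimal ones, and stage two clusters the survivors to surface diverse calibrations. The loss function, bounds, threshold, and cluster count are all illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's code) of two-stage calibration:
# stage 1 samples parameter sets via LHS and keeps near-optimal candidates;
# stage 2 clusters them so distinct, equally plausible calibrations are retained.
import numpy as np
from scipy.stats import qmc
from sklearn.cluster import KMeans

def loss(params, target=0.3):
    """Hypothetical calibration loss: distance between a simulated summary and target data."""
    beta, gamma = params
    simulated_peak = beta / (beta + gamma)  # stand-in for running the disease-opinion model
    return abs(simulated_peak - target)

# Stage 1: Latin Hypercube Sampling over the parameter space, keep near-optimal sets.
sampler = qmc.LatinHypercube(d=2, seed=0)
candidates = qmc.scale(sampler.random(n=500), l_bounds=[0.05, 0.05], u_bounds=[1.0, 1.0])
losses = np.array([loss(p) for p in candidates])
near_optimal = candidates[losses < np.quantile(losses, 0.05)]

# Stage 2: cluster the near-optimal sets; cluster centers summarize distinct calibrations.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(near_optimal)
print(kmeans.cluster_centers_)
```

The same skeleton applies if stage one is swapped for SMAC, Optuna, SA, or GA; the abstract's point is that samplers differ in how much diversity survives into the clustering stage.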
-
Successful problem-based learning (PBL) often requires students to collectively regulate their learning processes as a group and engage in socially shared regulation of learning (SSRL). This paper focuses on how facilitators supported SSRL in the context of middle-school game-based PBL. Using conversation analysis, this study analyzed text-based chat messages from facilitators and students collected during gameplay. The analysis revealed direct modeling strategies, such as performing regulative processes, promoting group awareness, and dealing with contingency, as well as indirect strategies, including prompting questions and acknowledging regulation, and it traced how facilitation faded to yield responsibility back to students for regulating their own learning. The findings can inform researchers and practitioners in designing prompts and developing technological tools, such as adaptive scaffolding, to support SSRL in PBL and other collaborative inquiry processes.