- Search Results
Search for: All records
Total Resources: 2
-
This study investigates the implementation of a classroom response system in STEM education in a higher education context. The study used ExplainIt, a web-based classroom response system designed to support students’ self-explanations and provide instant feedback. Data were collected from 32 undergraduate students using four instruments covering demographic information, self-efficacy, engagement, and system evaluation. The results showed that students reported positive learning experiences, demonstrated increased self-efficacy in STEM content, and indicated high levels of engagement following their use of ExplainIt.
Free, publicly accessible full text available May 1, 2026.
-
Carpenter, D.; Min, W.; Lee, S.; Ozogul, G.; Zheng, X.; Lester, J. (Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications, Association for Computational Linguistics). Kochmar, E.; Bexte, M.; Burstein, J.; Horbach, A.; Laarmann-Quante, R.; Tack, A.; Yaneva, V.; Yuan, Z. (Eds.)
The practice of soliciting self-explanations from students is widely recognized for its pedagogical benefits. However, the labor-intensive effort required to manually assess students’ explanations makes this impractical in classroom settings. As a result, many current solutions for gauging students’ understanding during class are limited to multiple-choice or fill-in-the-blank questions, which are less effective at exposing misconceptions or helping students understand and integrate new concepts. Recent advances in large language models (LLMs) present an opportunity to assess student explanations in real time, making explanation-based classroom response systems feasible to implement. In this work, we investigate LLM-based approaches for assessing the correctness of students’ explanations in response to undergraduate computer science questions. We investigate alternative prompting approaches for multiple LLMs (i.e., Llama 2, GPT-3.5, and GPT-4) and compare their performance to fine-tuned FLAN-T5 models. The results suggest that fine-tuned FLAN-T5 achieves the highest accuracy and weighted F1 score, while an in-context learning approach with GPT-4 attains the highest macro F1 score.
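The abstract does not include the authors’ prompts or evaluation code, but a minimal sketch can illustrate the two moving parts it describes: an in-context grading prompt for an LLM and the distinction between weighted and macro F1 that separates the FLAN-T5 and GPT-4 results. Everything below is a hypothetical illustration, not the paper’s method: the prompt template, the three-way label set (correct / partially_correct / incorrect), and the example labels are all assumptions.

```python
# Hypothetical sketch (not the authors' code): a zero-shot grading prompt,
# plus a comparison of weighted vs. macro F1 on an imbalanced label set.
from sklearn.metrics import accuracy_score, f1_score

# Assumed prompt template; the paper's actual prompts are not given.
PROMPT_TEMPLATE = (
    "You are grading a student's self-explanation for a computer science question.\n"
    "Question: {question}\n"
    "Student explanation: {explanation}\n"
    "Label the explanation as exactly one of: correct, partially_correct, incorrect."
)

def build_prompt(question: str, explanation: str) -> str:
    """Format one grading prompt to send to an LLM."""
    return PROMPT_TEMPLATE.format(question=question, explanation=explanation)

# Invented gold labels and predictions for a class-imbalanced task
# (8 correct, 2 partially_correct, 2 incorrect).
gold = ["correct"] * 8 + ["partially_correct"] * 2 + ["incorrect"] * 2
pred = ["correct"] * 10 + ["incorrect", "correct"]

# Weighted F1 averages per-class F1 weighted by class frequency, so it tracks
# majority-class performance; macro F1 weights every class equally, so a
# system that handles rare classes better can lead on macro F1 while trailing
# on weighted F1 -- the split the abstract reports between FLAN-T5 and GPT-4.
print("accuracy   :", accuracy_score(gold, pred))
print("weighted F1:", f1_score(gold, pred, average="weighted"))
print("macro F1   :", f1_score(gold, pred, average="macro"))
```

Because the two averages respond differently to class imbalance, reporting both (as the abstract does) is what allows fine-tuned FLAN-T5 and GPT-4 to each “win” on a different metric.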