-
This work investigates the relationship between consistent attendance (attendance rates in a group that keeps the same tutor and students across the school year) and learning in small-group tutoring sessions. We analyzed data from two large urban districts covering 206 ninth-grade student groups (3–6 students per group), for a total of 803 students and 75 tutors. Students attended small-group tutorials approximately every other day during the school year and completed pre- and post-assessments of math skills at the start and end of the year, respectively. First, we found that a group's attendance rate predicted individual assessment scores better than the individual attendance rates of the students comprising that group. Second, we found that groups with high consistent attendance had more frequent and more diverse tutor and student talk centered on rich mathematical discussion. While we acknowledge that changing tutors or groups is sometimes necessary, our findings suggest that consistently attending tutorial sessions as a group with the same tutor may lead the group to implicitly learn as a team despite not formally being one.
Free, publicly-accessible full text available June 22, 2025
-
Paaßen, Benjamin; Demmans Epp, Carrie (Eds.) One of the areas where Large Language Models (LLMs) show promise is automated qualitative coding, typically framed as a text classification task in natural language processing (NLP). Their demonstrated ability to leverage in-context learning to perform well even in data-scarce settings raises the question of whether collecting and annotating large-scale data to train qualitative coding models is still worthwhile. In this paper, we empirically investigate the performance of LLMs used in prompting-based in-context learning settings and compare them to models trained under the traditional pretraining–finetuning paradigm on task-specific annotated data, focusing on tasks that involve qualitative coding of classroom dialog. Compared to the domains where NLP studies are typically situated, classroom dialog is far more natural and therefore messier. Moreover, tasks in this domain are nuanced, theoretically grounded, and require a deep understanding of the conversational context. We provide a comprehensive evaluation across five datasets, including tasks such as talk move prediction and identification of collaborative problem-solving skills. Our findings show that task-specific finetuning strongly outperforms in-context learning, demonstrating the continuing need for high-quality annotated training datasets.
Free, publicly-accessible full text available January 1, 2025