As an important step in the development of browser-based writing-to-learn software that provides immediate structural feedback, we seek ways to improve the quality of students’ essays and to optimize the software’s analysis algorithm. This quasi-experimental investigation compares the quality of students’ summary writing under three writing prompt conditions: otherwise identical prompts that include either 0, 14, or 26 key terms. Results show that the presence of key terms matters substantially: students given the prompt without key terms wrote longer essays, and the resulting networks of those essays were more similar both to the expert referent and to their peers’ essays. Although tentative, these results indicate that writing prompts should NOT include key terms.
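The abstract does not describe the analysis algorithm itself, so the following is only a minimal sketch of one way such a comparison could work, assuming term co-occurrence networks scored by Jaccard overlap of their edges (the windowing and similarity measure are illustrative assumptions, not the paper’s method):

```python
# Hedged sketch: build a simple co-occurrence network per essay and compare
# a student network against an expert referent. Window size and Jaccard
# similarity are assumptions for illustration only.

def cooccurrence_edges(text: str, window: int = 5) -> set[frozenset]:
    """Collect undirected word-pair edges within a sliding window."""
    words = text.lower().split()
    edges = set()
    for i in range(len(words)):
        for j in range(i + 1, min(i + window, len(words))):
            if words[i] != words[j]:
                edges.add(frozenset((words[i], words[j])))
    return edges

def network_similarity(student_text: str, expert_text: str) -> float:
    """Jaccard overlap of the two essays' co-occurrence networks."""
    s = cooccurrence_edges(student_text)
    e = cooccurrence_edges(expert_text)
    return len(s & e) / len(s | e) if s | e else 0.0

print(network_similarity("the heart pumps blood through the arteries",
                         "blood is pumped by the heart into the arteries"))
```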
Predicting the Quality of Revisions in Argumentative Writing
The ability to revise in response to feedback is critical to students’ writing success. In the case of argument writing specifically, identifying whether an argument revision (AR) is successful is a complex problem because AR quality depends on the overall content of an argument. For example, adding the same evidence sentence could strengthen or weaken existing claims in different argument contexts (ACs). To address this issue, we developed Chain-of-Thought prompts to facilitate ChatGPT-generated ACs for AR quality prediction. Experiments on two corpora, our annotated elementary school essays and an existing benchmark of college essays, demonstrate the superiority of the proposed ACs over baselines.
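The paper’s exact Chain-of-Thought prompts are not reproduced in this abstract, so the sketch below is a hypothetical reconstruction of how such a prompt might be assembled; the step wording and the build_cot_prompt helper are assumptions:

```python
# Hedged sketch: assemble a Chain-of-Thought prompt that asks the model to
# first summarize the argument context (AC) and only then judge the
# argument revision (AR). The prompt text is illustrative, not the paper's.

def build_cot_prompt(essay_before: str, revision: str) -> str:
    """Build a CoT prompt for judging one revision in its argument context."""
    return (
        "Read the essay and the revised sentence.\n"
        f"Essay: {essay_before}\n"
        f"Revision: {revision}\n"
        "Step 1: Summarize the claims and evidence already in the essay.\n"
        "Step 2: Explain how the revision relates to that context.\n"
        "Step 3: Answer 'successful' or 'unsuccessful', with a brief reason."
    )

# The resulting string would then be sent to a chat model such as ChatGPT.
print(build_cot_prompt("School uniforms reduce distraction.",
                       "A 2019 survey found 60% of students agree."))
```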
- Award ID(s):
- 2202347
- PAR ID:
- 10504441
- Publisher / Repository:
- Association for Computational Linguistics
- Date Published:
- Journal Name:
- 18th Workshop on Innovative Use of NLP for Building Educational Applications
- Page Range / eLocation ID:
- 275 to 287
- Format(s):
- Medium: X
- Location:
- Toronto, Canada
- Sponsoring Org:
- National Science Foundation
More Like this
-
The present study examined the extent to which adaptive feedback and just-in-time writing strategy instruction improved the quality of high school students’ persuasive essays in the context of the Writing Pal (W-Pal). W-Pal is a technology-based writing tool that integrates automated writing evaluation into an intelligent tutoring system. Students wrote a pretest essay, engaged with W-Pal’s adaptive instruction over the course of four training sessions, and then completed a posttest essay. For each training session, W-Pal differentiated strategy instruction for each student based on specific weaknesses in the initial training essays before providing the opportunity to revise. The results indicated that essay quality improved from pretest to posttest, both holistically and on several specific dimensions, particularly for students with lower literacy skills. Moreover, students’ scores on some of the training essays improved from the initial to the revised version on the dimensions of essay quality targeted by instruction, whereas scores did not improve on the dimensions that were not targeted. Overall, the results suggest that W-Pal’s adaptive strategy instruction can improve both the holistic quality of students’ essays and specific dimensions of essay quality.
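The abstract only says that instruction was differentiated based on each student’s specific weaknesses; a minimal sketch of that routing idea, with hypothetical dimension names and lesson titles, might look like this:

```python
# Hedged sketch: route a student to the strategy lesson targeting the
# weakest scored dimension of a training essay. Dimension names and
# lessons are invented for illustration; W-Pal's real logic is not shown.
LESSONS = {
    "cohesion": "Using connectives and transitions",
    "evidence": "Elaborating claims with examples",
    "structure": "Outlining introduction, body, and conclusion",
}

def pick_lesson(scores: dict[str, float]) -> str:
    """Return the lesson for the lowest-scoring rubric dimension."""
    weakest = min(scores, key=scores.get)
    return LESSONS[weakest]

print(pick_lesson({"cohesion": 2.0, "evidence": 3.5, "structure": 4.0}))
```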
-
Hoadley, C.; Wang, X. C. (Eds.)
Helping students learn how to write is essential. However, students have few opportunities to develop this skill, since giving timely feedback is difficult for teachers. AI applications can provide quick feedback on students’ writing, but ensuring accurate assessment can be challenging because students’ writing quality varies. We examined the impact of students’ writing quality on the error rate of our natural language processing (NLP) system when assessing scientific content in initial and revised design essays. We also explored whether aspects of writing quality were linked to the number of NLP errors. Although students’ revised essays differed significantly from their initial essays in a few ways, our NLP system’s accuracy was similar across drafts. Further, our multiple regression analyses showed that, overall, students’ writing quality did not affect the NLP system’s accuracy. This is promising for ensuring that students with different writing skills receive similarly accurate feedback.
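The abstract does not list the regression’s predictors, so the sketch below uses synthetic stand-in features (essay length and spelling errors) merely to show the shape of such an analysis with statsmodels:

```python
# Hedged sketch: regress NLP error counts on writing-quality proxies.
# The features and data here are synthetic; only the analysis shape
# mirrors the multiple regression described in the abstract.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
essay_length = rng.normal(300, 60, 100)    # synthetic words-per-essay
spelling_errors = rng.poisson(4, 100)      # synthetic quality proxy
nlp_errors = rng.poisson(2, 100)           # synthetic outcome

X = sm.add_constant(np.column_stack([essay_length, spelling_errors]))
model = sm.OLS(nlp_errors, X).fit()
print(model.summary())
```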
-
We present a unique dataset of student source-based argument essays to facilitate research on the relations between content, argumentation skills, and assessment. Two classroom writing assignments were given to college students in a STEM major, accompanied by a carefully designed rubric. The paper presents a reliability study of the rubric, showing it to be highly reliable, and initial annotation of the essays’ content and argumentation.
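The abstract reports high reliability but not the statistic used; quadratic-weighted Cohen’s kappa is a common choice for ordinal rubric scores and is used below purely as an assumption:

```python
# Hedged sketch: inter-rater reliability for ordinal rubric scores using
# quadratic-weighted Cohen's kappa. The scores are synthetic examples.
from sklearn.metrics import cohen_kappa_score

rater_a = [4, 3, 5, 2, 4, 3, 5, 1]
rater_b = [4, 3, 4, 2, 5, 3, 5, 2]

kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")  # values near 1 indicate agreement
```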
-
Dziri, Nouha; Ren, Sean; Diao, Shizhe (Eds.)
The ability to revise essays in response to feedback is important for students’ writing success. An automated writing evaluation (AWE) system that supports students in revising their essays is thus essential. We present eRevise+RF, an enhanced AWE system for assessing student essay revisions (i.e., changes made to an essay in response to feedback to improve its quality) and providing revision feedback. We deployed the system with 6 teachers and 406 students across 3 schools in Pennsylvania and Louisiana. The results confirmed its effectiveness in (1) assessing student essays in terms of evidence usage, (2) extracting evidence and reasoning revisions across essay drafts, and (3) determining revision success in responding to feedback. The evaluation also suggested that eRevise+RF is a helpful system for young students to improve their argumentative writing skills through revision and formative feedback.
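eRevise+RF’s revision-extraction component is surely more sophisticated than this, but a minimal sketch of the underlying idea, aligning two drafts sentence by sentence with the standard-library difflib, might be:

```python
# Hedged sketch: align two essay drafts and label inserted, deleted, or
# replaced sentences. This is a stdlib illustration, not eRevise+RF's code.
import difflib

def extract_revisions(draft1: list[str], draft2: list[str]):
    """Yield (op, old_sentences, new_sentences) for each change."""
    matcher = difflib.SequenceMatcher(a=draft1, b=draft2)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            yield op, draft1[i1:i2], draft2[j1:j2]

before = ["Dogs are loyal.", "They guard homes."]
after = ["Dogs are loyal.", "They guard homes.",
         "Studies also link dogs to lower stress."]
for change in extract_revisions(before, after):
    print(change)  # e.g. ('insert', [], ['Studies also link dogs ...'])
```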