Title: The Influence of Essay Prompt Directedness on the Content Quality of Summary Writing
As an important step in the development of browser-based writing-to-learn software that provides immediate structural feedback, we seek ways to improve the quality of students' essays and to optimize the software's analysis algorithm. This quasi-experimental investigation compares the quality of students' summary writing under three writing prompt conditions: otherwise identical prompts that include either 0, 14, or 26 key terms. Results show that key terms matter substantially: students given the prompt without key terms wrote longer essays, and the networks derived from those essays were more like the expert referent and more like their peers' essays. Although tentative, these results indicate that writing prompts should NOT include key terms.
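For context, here is a minimal sketch, not the authors' algorithm, of one common way such structural analysis can work: derive a term co-occurrence network from an essay and score its edge overlap with an expert referent network. The term list, sample sentences, and sentence-level co-occurrence rule are all illustrative assumptions.

```python
# Sketch only: derive a term network from an essay and compare it to an
# expert referent via edge-set overlap. Terms and texts are invented.
import re
from itertools import combinations

def essay_network(text: str, terms: set[str]) -> set[frozenset]:
    """Link every pair of key terms that co-occur in a sentence."""
    edges = set()
    for sentence in re.split(r"[.!?]+", text.lower()):
        present = {t for t in terms if t in sentence}
        edges.update(frozenset(p) for p in combinations(sorted(present), 2))
    return edges

def jaccard(a: set, b: set) -> float:
    """Edge-set overlap: 1.0 means identical networks."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

terms = {"insulation", "thermal mass", "glazing", "ventilation"}
expert = essay_network(
    "Insulation and thermal mass reduce loads. Glazing affects ventilation.",
    terms)
student = essay_network(
    "Glazing and ventilation interact. Insulation slows heat flow.", terms)
print(f"similarity to expert referent: {jaccard(student, expert):.2f}")
```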
Award ID(s):
2215807
PAR ID:
10618840
Author(s) / Creator(s):
Publisher / Repository:
AERA
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Sampson, Demetrios; Ifenthaler, Dirk; Isaías, Pedro (Ed.)
    This quasi-experimental study seeks to improve the conceptual quality of lesson summary essays by comparing two conditions: essay prompts with or without a list of concepts from the lesson. It is assumed that these terms can be used as "anchors" while writing. Participants (n = 90) in an undergraduate Architectural Engineering course read the assigned textbook chapter and attended lectures and labs over a two-week period, then in the final lab session were asked to write a 300-word summary of the lesson content. Data for analysis consist of these essays and the end-of-unit multiple-choice test. Compared to the expert essay benchmark, the essay networks of those receiving the list of terms in the writing prompt were not significantly different from those of students who did not receive the terms, but were significantly more like peers' essay networks, the network of the Chapter 11 PowerPoint lecture, and the network of the Chapter 9 PowerPoint lecture. In addition, those receiving the list of terms in the writing prompt performed significantly better on the end-of-unit test than those not receiving the terms. Term frequency analysis (sketched below) indicates that only the most network-central terms in the list showed a greater frequency in essays; the other terms' frequencies were remarkably similar for both the Terms and No-Terms groups, suggesting a similar underlying conceptual mental model of the lesson content. More research is needed to understand how including concept terms in a writing prompt influences essay conceptual structure and test performance.
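    The term-frequency comparison described above could be run along these lines; this is a hedged sketch in which the term list and the two small essay corpora are invented stand-ins for the study's data.

    ```python
    # Sketch: count prompt-term frequencies in each group's essays.
    from collections import Counter
    import re

    def term_counts(essays: list[str], terms: list[str]) -> Counter:
        """Whole-word counts of each key term across a set of essays."""
        counts = Counter()
        for essay in essays:
            text = essay.lower()
            for term in terms:
                counts[term] += len(re.findall(rf"\b{re.escape(term)}\b", text))
        return counts

    terms = ["load", "beam", "shear", "moment"]  # hypothetical term list
    terms_group = ["The beam carries the load ...", "Shear and moment govern ..."]
    no_terms_group = ["A beam resists load ...", "Moment diagrams show ..."]

    with_terms = term_counts(terms_group, terms)
    without_terms = term_counts(no_terms_group, terms)
    for term in terms:
        print(f"{term:8s} Terms group: {with_terms[term]}  "
              f"No-Terms group: {without_terms[term]}")
    ```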
  2. Hoadley, C; Wang, XC (Ed.)
    Helping students learn how to write is essential. However, students have few opportunities to develop this skill, since giving timely feedback is difficult for teachers. AI applications can provide quick feedback on students' writing, but ensuring accurate assessment can be challenging, since students' writing quality varies. We examined the impact of students' writing quality on the error rate of our natural language processing (NLP) system when assessing scientific content in initial and revised design essays. We also explored whether aspects of writing quality were linked to the number of NLP errors. Although students' revised essays differed significantly from their initial essays in a few ways, our NLP system's accuracy was similar. Further, our multiple regression analyses (of the kind sketched below) showed that, overall, students' writing quality did not impact our NLP system's accuracy. This is promising in terms of ensuring that students with different writing skills get similarly accurate feedback.
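    A regression of the kind mentioned above might look like the following sketch; the predictors (word count, spelling-error rate, mean sentence length) and the synthetic data are assumptions, not the paper's actual features or results.

    ```python
    # Sketch: regress per-essay NLP error counts on writing-quality
    # measures; non-significant coefficients would mirror the finding
    # that writing quality did not affect system accuracy.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 100
    X = np.column_stack([
        rng.normal(300, 50, n),     # word count (hypothetical feature)
        rng.normal(0.02, 0.01, n),  # spelling-error rate (hypothetical)
        rng.normal(15, 4, n),       # mean sentence length (hypothetical)
    ])
    y = rng.poisson(2, n)           # NLP errors per essay (synthetic)

    model = sm.OLS(y, sm.add_constant(X)).fit()
    print(model.summary())
    ```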
    The ability to revise in response to feedback is critical to students' writing success. In the case of argument writing specifically, identifying whether an argument revision (AR) is successful is a complex problem because AR quality depends on the overall content of an argument. For example, adding the same evidence sentence could strengthen or weaken existing claims in different argument contexts (ACs). To address this issue, we developed Chain-of-Thought prompts that elicit ChatGPT-generated ACs for AR quality prediction (see the sketch below). Experiments on two corpora, our annotated elementary essays and an existing college essay benchmark, demonstrate the superiority of the proposed ACs over baselines.
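    The paper's exact prompts are not reproduced here; the following is a hedged illustration of how a Chain-of-Thought prompt could elicit an argument context before a revision-quality judgment. The prompt wording, the judge_revision helper, and the model name are assumptions, and the openai client is used only as one plausible backend.

    ```python
    # Sketch: ask an LLM to reason about the argument context (AC)
    # before labeling a revision as successful or not.
    from openai import OpenAI  # assumes the openai Python client is installed

    COT_PROMPT = """You are analyzing an argumentative essay revision.
    Step 1: Summarize the essay's claims and evidence (the argument context).
    Step 2: Describe what the revision adds, removes, or changes.
    Step 3: Given that context, decide whether the revision is successful.

    Essay before revision:
    {before}

    Essay after revision:
    {after}

    Give your Step 1-3 reasoning, then a final label: SUCCESSFUL or UNSUCCESSFUL."""

    def judge_revision(before: str, after: str,
                       model: str = "gpt-4o-mini") -> str:  # model is assumed
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": COT_PROMPT.format(before=before, after=after)}],
        )
        return response.choices[0].message.content
    ```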
  4. Dziri, Nouha; Ren, Sean; Diao, Shizhe (Ed.)
    The ability to revise essays in response to feedback is important for students’ writing success. An automated writing evaluation (AWE) system that supports students in revising their essays is thus essential. We present eRevise+RF, an enhanced AWE system for assessing student essay revisions (e.g., changes made to an essay to improve its quality in response to essay feedback) and providing revision feedback. We deployed the system with 6 teachers and 406 students across 3 schools in Pennsylvania and Louisiana. The results confirmed its effectiveness in (1) assessing student essays in terms of evidence usage, (2) extracting evidence and reasoning revisions across essays, and (3) determining revision success in responding to feedback. The evaluation also suggested eRevise+RF is a helpful system for young students to improve their argumentative writing skills through revision and formative feedback. 
  5. This quasi-experimental investigation considers the influence of an instructor-led discussion of structural knowledge on the conceptual structure of summary essays from lesson to posttest. Undergraduate architectural engineering students, after completing the lecture portions of the topic Sustainability and Green Design, composed a 300-word summary essay during lab time using the online tool Graphical Interface of Knowledge Structure (GIKS, Authors, 2024, see Figure 1). Immediately afterward, one lab section participated in an instructor-led discussion of its group-average essay structure, noting correct conceptions as well as common misconceptions, while the other two sections also wrote but did not have this discussion. Posttest essays were collected the following week. Relative to no discussion, the instructor-led discussion of the networks did improve posttest essay writing quality (human rater) but NOT content quality. The data indicate that the discussion altered students' conceptual structures of the central terms in the expert network (see the centrality sketch below), but at the expense of peripheral, unmentioned terms. Therefore, instructor-led discussion of content conceptual structure likely does influence students' conceptual knowledge structures, and teachers and instructors must be vigilant in preparing and presenting such a discussion to make sure it appropriately and adequately covers the content.
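    To illustrate the central-versus-peripheral distinction above, here is a minimal sketch, assuming the expert referent network is available as an edge list, of ranking terms by degree centrality; the edges shown are invented.

    ```python
    # Sketch: separate network-central from peripheral terms in an
    # expert referent network using degree centrality.
    import networkx as nx

    expert_edges = [  # hypothetical expert network for the lesson
        ("sustainability", "green design"), ("sustainability", "energy"),
        ("sustainability", "materials"), ("energy", "daylighting"),
        ("materials", "recycling"),
    ]
    G = nx.Graph(expert_edges)

    centrality = nx.degree_centrality(G)
    ranked = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)
    central = [t for t, _ in ranked[:2]]     # most connected terms
    peripheral = [t for t, _ in ranked[2:]]  # sparsely connected terms
    print("central:", central)
    print("peripheral:", peripheral)
    ```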