Title: Instructor-led Structural Knowledge Reflection and Conceptual Structure of Summary Essays
This quasi-experimental investigation examines the influence of an instructor-led discussion of structural knowledge on the conceptual structure of summary essays from lesson to posttest. After completing the lectures on Sustainability and Green Design, undergraduate architectural engineering students composed a 300-word summary essay during lab time using the online tool Graphical Interface of Knowledge Structure (GIKS; Authors, 2024; see Figure 1). Immediately afterward, one lab section participated in an instructor-led discussion of their group-average essay structure, noting correct conceptions as well as common misconceptions, while the other two sections wrote essays but did not have this discussion. Posttest essays were collected the following week. Relative to no discussion, the instructor-led discussion of the networks improved posttest essay writing quality (human-rated) but NOT content quality. The data indicate that the discussion altered students' conceptual structures of the central terms in the expert network, but at the expense of peripheral terms that went unmentioned. Instructor-led discussion of a lesson's conceptual structure therefore likely influences students' conceptual knowledge structures, and teachers and instructors must prepare and present such a discussion carefully to ensure it adequately covers the content.
Award ID(s): 2215807
PAR ID: 10620791
Publisher / Repository: American Educational Research Association (AERA)
Location: Annual Meeting of the American Educational Research Association (AERA), Denver, CO, April 2025
Sponsoring Org: National Science Foundation
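
The abstract above describes GIKS rendering each essay as a network of linked concepts and comparing it with an expert referent network. The paper's actual algorithm is not given here, so the following is a minimal, hypothetical Python sketch of the general technique: sentence-level co-occurrence stands in for whatever proximity measure GIKS uses, and the term list, essays, and expert text are invented examples, not the study's data.

```python
import re
from itertools import combinations

# Invented term list standing in for the lesson's key concepts
# (an assumption, not the study's actual GIKS term list).
TERMS = {"sustainability", "energy", "recycling", "daylighting", "insulation"}

def essay_network(text):
    """Link every pair of key terms that co-occur in at least one sentence."""
    links = set()
    for sentence in re.split(r"[.!?]+", text.lower()):
        present = sorted(t for t in TERMS if t in sentence)
        links.update(frozenset(p) for p in combinations(present, 2))
    return links

def link_agreement(student, expert):
    """Jaccard similarity of two link sets: 1.0 means identical structure."""
    union = student | expert
    return len(student & expert) / len(union) if union else 0.0

# Invented example texts.
expert = essay_network(
    "Sustainability depends on energy use and insulation. "
    "Daylighting reduces energy demand. Recycling supports sustainability."
)
student = essay_network(
    "Recycling and sustainability go together. Insulation saves energy."
)
print(f"Link agreement with expert network: {link_agreement(student, expert):.2f}")
```

Averaging link sets over a lab section would give the group-average structure the abstract mentions; the discussion condition then reviews that shared network against the expert's.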
More Like this
  1. Sampson, Demetrios; Ifenthaler, Dirk; Isaías, Pedro (Eds.)
    This quasi-experimental study seeks to improve the conceptual quality of lesson summary essays by comparing two conditions: essay prompts with or without a list of concepts from the lesson. It is assumed that these terms can serve as "anchors" while writing. Participants (n = 90) in an undergraduate Architectural Engineering course read the assigned textbook chapter and attended lectures and labs over a two-week period, then in the final lab session were asked to write a 300-word summary of the lesson content. The data analyzed consist of these essays and the end-of-unit multiple-choice test. Compared to the expert essay benchmark, the essay networks of those receiving the list of terms in the writing prompt were not significantly different from those of students who did not receive the terms, but they were significantly more similar to peers' essay networks, the network of the Chapter 11 PowerPoint lecture, and the network of the Chapter 9 PowerPoint lecture. In addition, those receiving the list of terms in the writing prompt performed significantly better on the end-of-unit test than those not receiving the terms. Term-frequency analysis indicates that only the most network-central terms in the term list showed greater frequency in essays; the frequencies of the other terms were remarkably similar for both the Terms and No Terms groups, suggesting a similar underlying conceptual mental model of this lesson content (see the sketch after this list). More research is needed to understand how including concept terms in a writing prompt influences essay conceptual structure and test performance.
  2. Because most STEM classrooms overlook the intrinsic conceptual structure of domain content, strategies for improving students' conceptual structure have promise for improving STEM learning outcomes. This experimental investigation continues the development of the web-based tool Graphical Interface of Knowledge Structure (GIKS), which provides immediate formative feedback as a network of the concepts in a student's essay alongside an expert referent network for comparison and reflection. What should this feedback network look like; in particular, should it be more inclusive, or small and focused? And does preexisting domain knowledge affect the effectiveness of each type of network feedback? Undergraduate students in a second-year Architectural Engineering course, after completing a two-week lesson on Building with Wood, were randomly assigned to a summary writing task with either Full feedback (a network with 14 central and 12 peripheral terms) or Focused feedback (a network with only the 14 central terms), and then immediately completed a knowledge structure survey. Two weeks later, they completed an End-of-Unit posttest consisting of a Central-items and a Peripheral-items subtest. A significant interaction of feedback and domain knowledge was observed for post knowledge structure: the low-domain-knowledge students in the Focused feedback group had the most central-link agreement with the expert and the least peripheral-link agreement. On the End-of-Unit declarative knowledge posttest, there was no difference between the Full and Focused feedback interventions, but the high-domain-knowledge students in both interventions performed significantly better than the low-domain-knowledge students on the Central-items subtest but not on the Peripheral-items subtest. This investigation shows the need for further research on the role of domain-normative central concepts and pragmatically contributes to the design of essay prompts for STEM classroom use.
  3. The present study examined the extent to which adaptive feedback and just-in-time writing strategy instruction improved the quality of high school students’ persuasive essays in the context of the Writing Pal (W-Pal). W-Pal is a technology-based writing tool that integrates automated writing evaluation into an intelligent tutoring system. Students wrote a pretest essay, engaged with W-Pal’s adaptive instruction over the course of four training sessions, and then completed a posttest essay. For each training session, W-Pal differentiated strategy instruction for each student based on specific weaknesses in the initial training essays prior to providing the opportunity to revise. The results indicated that essay quality improved overall from pretest to posttest with respect to holistic quality, as well as several specific dimensions of essay quality, particularly for students with lower literacy skills. Moreover, students’ scores on some of the training essays improved from the initial to revised version on the dimensions of essay quality that were targeted by instruction, whereas scores did not improve on the dimensions that were not targeted by instruction. Overall, the results suggest that W-Pal’s adaptive strategy instruction can improve the quality of students’ essays overall, as well as more specific dimensions of essay quality. 
  4. How does the conceptual structure of external representations contribute to learning? This investigation considered the influence of generative concept sorting (Study 1, n = 58) and of external structure information (Study 2, n = 120), moderated by perceived difficulty. In Study 1, undergraduate students completed a perceived-difficulty survey and a comprehension pretest, then a sorting task, and finally a comprehension posttest. Results showed that both perceived difficulty and comprehension pretest significantly predicted comprehension posttest performance. Learners who perceived history as difficult attained significantly greater posttest scores and had more expert-like networks. In Study 2, participants completed the perceived-difficulty survey and comprehension pretest, then read a text with different external structure support, either an expert network or an equivalent outline of the text, and finally completed a sorting-task posttest and a comprehension posttest. In Study 2, there was no significant difference for external structure support on posttest comprehension (outline = network), but reading with an outline led to a linear, topic-order conceptual structure of the text, while reading with a network led to a more expert-like relational structure. As in Study 1, comprehension pretest and perceived difficulty significantly predicted posttest performance, but in contrast to Study 1, learners who perceived history as easy attained significantly greater posttest scores. For theory-building purposes, post-reading mental representations matched the form of the external representation used while reading. Practitioners should consider using generative sorting tasks when relearning history content.
  5. As use of artificial intelligence (AI) has increased, concerns about AI bias and discrimination have been growing. This paper discusses an application called PyrEval in which natural language processing (NLP) was used to automate assessment and provide feedback on middle school science writing without linguistic discrimination. Linguistic discrimination in this study was operationalized as unfair assessment of scientific essays based on writing features that are not considered normative, such as subject-verb disagreement. Such unfair assessment is especially problematic when the purpose of assessment is not assessing English writing but rather assessing the content of scientific explanations. PyrEval was implemented in middle school science classrooms. Students explained their roller coaster designs by stating relationships among science concepts such as potential energy, kinetic energy, and the law of conservation of energy. Initial and revised versions of scientific essays written by 307 eighth-grade students were analyzed. Our comparison of manual and NLP assessments showed that PyrEval did not penalize student essays that contained non-normative writing features. Repeated-measures ANOVA and GLMM results revealed that essay quality significantly improved from initial to revised essays after receiving the NLP feedback, regardless of non-normative writing features. Findings and implications are discussed.
     Practitioner notes
     What is already known about this topic
     - Advancement in AI has created a variety of opportunities in education, including automated assessment, but AI is not bias-free.
     - Automated writing assessment designed to improve students' scientific explanations has been studied.
     - While limited, some studies reported biased performance of automated writing assessment tools, but without looking into the actual linguistic features against which the tools may have discriminated.
     What this paper adds
     - This study examined non-normative linguistic features in essays written by middle school students to uncover how our NLP tool, PyrEval, assessed them.
     - PyrEval did not penalize essays containing non-normative linguistic features.
     - Regardless of non-normative linguistic features, students' essay quality scores significantly improved from initial to revised essays after receiving feedback from PyrEval. Essay quality improvement was observed regardless of students' prior knowledge, school district, and teacher variables.
     Implications for practice and/or policy
     - This paper inspires practitioners to attend to linguistic discrimination (re)produced by AI.
     - This paper offers possibilities of using PyrEval as a reflection tool, against which human assessors compare their assessment and discover implicit bias against non-normative linguistic features.
     - PyrEval is available for use at github.com/psunlpgroup/PyrEvalv2.
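
Items 1 and 2 above both turn on separating network-central terms from peripheral ones and comparing how often each appears across groups of essays. The sketch below (referenced in item 1) is a hypothetical Python illustration of that kind of analysis: degree centrality stands in for whatever centrality measure the studies actually used, and the expert links, terms, and essays are invented examples.

```python
from collections import Counter

# Invented expert network links (an assumption, not the studies' data).
EXPERT_LINKS = [
    ("sustainability", "energy"), ("sustainability", "recycling"),
    ("energy", "insulation"), ("energy", "daylighting"),
    ("recycling", "materials"),
]

# Degree centrality: how many expert links each term participates in.
degree = Counter(term for link in EXPERT_LINKS for term in link)
median_degree = sorted(degree.values())[len(degree) // 2]
central = {t for t, d in degree.items() if d >= median_degree}
peripheral = set(degree) - central

def term_frequencies(essays):
    """Count how many essays in a group mention each term."""
    return Counter(t for essay in essays for t in degree if t in essay.lower())

# Invented essays for a Terms group and a No Terms group.
groups = {
    "Terms": ["Sustainability needs energy and recycling.",
              "Energy loss depends on insulation."],
    "No Terms": ["Recycling of building materials.",
                 "Energy use matters for sustainability."],
}

for label, essays in groups.items():
    freqs = term_frequencies(essays)
    print(label,
          "| central:", {t: freqs[t] for t in sorted(central)},
          "| peripheral:", {t: freqs[t] for t in sorted(peripheral)})
```

Splitting at the median degree is an arbitrary choice for this sketch; in the studies themselves, which terms counted as central versus peripheral was fixed in advance by the expert network.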