Title: Black Linguistic Justice from Theory to Practice
While writing studies and linguistic scholarship have interrogated race and college writing instruction over the last fifty years, we contend that explicit, actionable, and supportive guidance on giving feedback to Black students’ writing is still needed. Building on the legacy of work visible in the Students’ Right to Their Own Language original (Conference on College Composition and Communication, 1974) and updated (2006) annotated bibliography, as well as the crucial work done since then, our interdisciplinary team of linguists and writing studies scholars and students constructed the Students’ Right to Their Own Writing website. We describe the research-based design of the website and share evaluations of the website from focus group sessions. Acknowledging the contingent and overburdened nature of the labor force in most writing programs, the focus group participants particularly appreciated the infographics, how-tos and how-not-tos, and samples of feedback. The result is a demonstration of how to actually take up the call to enact Black Linguistic Justice (Baker-Bell et al., “This Ain’t Another Statement”).
Award ID(s):
2126414 2126405 2228283
PAR ID:
10559587
Author(s) / Creator(s):
; ; ; ; ;
Publisher / Repository:
National Council of Teachers of English
Date Published:
Journal Name:
College Composition & Communication
Volume:
75
Issue:
4
ISSN:
0010-096X
Page Range / eLocation ID:
647 to 674
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Lewis, Scott (Ed.)
    Research on student learning in organic chemistry indicates that students tend to focus on surface-level features of molecules with less consideration of implicit properties when engaging in mechanistic reasoning. Writing-to-learn (WTL) is one approach for supporting students’ mechanistic reasoning. A variation of WTL incorporates peer review and revision to provide opportunities for students to interact with and learn from their peers, as well as revisit and reflect on their own knowledge and reasoning. However, research indicates that the rhetorical features included in WTL assignments may influence the language students use in their responses. This study utilizes machine learning to characterize the mechanistic features present in second-semester undergraduate organic chemistry students’ responses to two versions of a WTL assignment with different rhetorical features. Furthermore, we examine the role of peer review on the mechanistic reasoning captured in students’ revised drafts. Our analysis indicates that students include both surface-level and implicit features of mechanistic reasoning in their drafts and in the feedback to their peers, with slight differences depending on the rhetorical features present in the assignment. However, students’ revisions appeared to be primarily connected to the peer review process via the presence of surface features in the drafts students read (as opposed to the feedback received). These findings indicate that further scaffolding focused on how to utilize information gained from the peer review process (i.e., both feedback received and drafts read) and emphasizing implicit properties could help support the utility of WTL for developing students’ mechanistic reasoning in organic chemistry.
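    A minimal sketch of the kind of text-classification setup that could be used to tag mechanistic-reasoning features in student writing, assuming a hand-coded label set (surface vs. implicit) and a scikit-learn pipeline; the abstract does not specify the study’s actual models or coding scheme, so the labels, sentences, and classifier below are purely illustrative.

```python
# Illustrative sketch only: a simple supervised classifier for tagging
# sentences in student drafts with mechanistic-reasoning categories.
# The label set and training examples are hypothetical; the study's
# actual machine-learning models are not described in this abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled sentences: "surface" = explicit structural features,
# "implicit" = electronics/energetics reasoning, "none" = neither.
train_sentences = [
    "The bromine leaves and the carbon forms a double bond.",
    "The carbocation is stabilized by the electron-donating group.",
    "I think the answer is B.",
]
train_labels = ["surface", "implicit", "none"]

# Bag-of-words features plus a linear classifier; real work would need
# far more data, cross-validation, and a validated coding scheme.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_sentences, train_labels)

draft = ["The nucleophile attacks because the carbonyl carbon is electron poor."]
print(model.predict(draft))  # e.g., ['implicit']
```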
    Abstract: As use of artificial intelligence (AI) has increased, concerns about AI bias and discrimination have been growing. This paper discusses an application called PyrEval in which natural language processing (NLP) was used to automate assessment and provide feedback on middle school science writing without linguistic discrimination. Linguistic discrimination in this study was operationalized as unfair assessment of scientific essays based on writing features that are not considered normative, such as subject-verb disagreement. Such unfair assessment is especially problematic when the purpose of assessment is not assessing English writing but rather assessing the content of scientific explanations. PyrEval was implemented in middle school science classrooms. Students explained their roller coaster design by stating relationships among such science concepts as potential energy, kinetic energy, and the law of conservation of energy. Initial and revised versions of scientific essays written by 307 eighth-grade students were analyzed. Our manual and NLP assessment comparison analysis showed that PyrEval did not penalize student essays that contained non-normative writing features. Repeated measures ANOVAs and GLMM analysis results revealed that essay quality significantly improved from initial to revised essays after receiving the NLP feedback, regardless of non-normative writing features. Findings and implications are discussed.
    Practitioner notes
    What is already known about this topic:
    - Advancement in AI has created a variety of opportunities in education, including automated assessment, but AI is not bias-free.
    - Automated writing assessment designed to improve students’ scientific explanations has been studied.
    - While limited, some studies reported biased performance of automated writing assessment tools, but without looking into the actual linguistic features about which the tools may have discriminated.
    What this paper adds:
    - This study conducted an actual examination of non-normative linguistic features in essays written by middle school students to uncover how our NLP tool, PyrEval, worked to assess them.
    - PyrEval did not penalize essays containing non-normative linguistic features.
    - Regardless of non-normative linguistic features, students’ essay quality scores significantly improved from initial to revised essays after receiving feedback from PyrEval. Essay quality improvement was observed regardless of students’ prior knowledge, school district, and teacher variables.
    Implications for practice and/or policy:
    - This paper inspires practitioners to attend to linguistic discrimination (re)produced by AI.
    - This paper offers possibilities of using PyrEval as a reflection tool, to which human assessors compare their assessment and discover implicit bias against non-normative linguistic features.
    - PyrEval is available for use on github.com/psunlpgroup/PyrEvalv2.
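    As a hypothetical illustration of the fairness check described above, and not PyrEval’s actual interface, the sketch below compares an automated content score across paired versions of the same explanation written with and without non-normative surface features; score_essay is a placeholder scorer and the example sentences are invented.

```python
# Illustrative bias check: the same scientific content, written with and
# without non-normative surface features (e.g., subject-verb disagreement),
# should receive the same content score. `score_essay` is a hypothetical
# stand-in for an automated scorer such as PyrEval, not its real API.
from statistics import mean

def score_essay(text: str) -> float:
    """Hypothetical content scorer; replace with the real tool's interface."""
    raise NotImplementedError

paired_essays = [
    # (normative phrasing, non-normative phrasing, same science content)
    ("The cart has more potential energy at the top of the hill.",
     "The cart have more potential energy at the top of the hill."),
    ("Kinetic energy increases as the cart goes down.",
     "Kinetic energy increase as the cart go down."),
]

def score_gap(pairs):
    """Mean score difference between normative and non-normative versions."""
    return mean(score_essay(a) - score_essay(b) for a, b in pairs)

# A gap near zero suggests the scorer rewards content rather than
# penalizing non-normative writing features.
# print(score_gap(paired_essays))
```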
  3. Cristea, Alexandra; Walker, Erin; Lu, Yu; Santos, Olga (Ed.)
    This project examines the prospect of using AI-generated feedback as suggestions to expedite and enhance human instructors’ feedback provision. In particular, we focus on understanding teaching assistants’ perspectives on the quality of AI-generated feedback and how they may or may not utilize AI feedback in their own workflows. We situate our work in a foundational college Economics class, which has frequent short essay assignments. We developed an LLM-powered feedback engine that generates feedback on students’ essays based on the grading rubrics used by the teaching assistants (TAs). To ensure that TAs could meaningfully critique and engage with the AI feedback, we first had them complete their regular grading jobs. For a randomly selected set of essays that they had graded, we used our feedback engine to generate feedback and displayed it as in-text comments in a Word document. We then performed think-aloud studies with 5 TAs over 20 one-hour sessions to have them evaluate the AI feedback, contrast it with their handwritten feedback, and share how they envision using the AI feedback if it were offered as suggestions. The study highlights the importance of providing detailed rubrics for AI to generate high-quality feedback for knowledge-intensive essays. TAs considered that using AI feedback as suggestions during their grading could expedite grading, enhance consistency, and improve overall feedback quality. We discuss the importance of decomposing the feedback generation task into steps and presenting intermediate results in order for TAs to make use of the AI feedback.
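    The abstract does not include the engine’s prompts or implementation, so the following is only a hedged sketch of how rubric-grounded, in-text feedback generation might be wired up; call_llm is a hypothetical stand-in for an LLM client and the rubric items are invented examples, not the course’s actual rubric.

```python
# Illustrative sketch of rubric-grounded feedback generation.
# `call_llm` is a hypothetical placeholder for whatever LLM client is
# available; the rubric below is invented for demonstration only.

RUBRIC = [
    "Thesis: states a clear economic claim that answers the prompt.",
    "Evidence: applies a relevant concept (e.g., opportunity cost) correctly.",
    "Reasoning: connects the evidence back to the claim.",
]

def build_prompt(essay: str, rubric: list[str]) -> str:
    """Assemble a prompt that asks for one comment per rubric item."""
    rubric_text = "\n".join(f"- {item}" for item in rubric)
    return (
        "You are a teaching assistant for an introductory economics course.\n"
        "Give feedback on the student essay below, one comment per rubric item.\n"
        "Quote the sentence each comment refers to so the comment can be\n"
        "anchored as an in-text annotation, and suggest one concrete revision.\n\n"
        f"Rubric:\n{rubric_text}\n\nEssay:\n{essay}\n"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real client here."""
    raise NotImplementedError

def generate_feedback(essay: str) -> str:
    return call_llm(build_prompt(essay, RUBRIC))
```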
  4. This study aims to identify the linguistic feature characteristics of multiple writing assignments completed by engineering undergraduates, including entry-level engineering laboratory reports and writing produced in non-engineering courses. We used Biber’s multidimensional analysis (MDA) method as the analysis tool for the student writing artifacts. MDA is a corpus-analysis methodology that utilizes language-processing software to analyze text by parts of speech (e.g., nouns, verbs, and prepositions). MDA typically identifies six “dimensions” of linguistic features along which a text may perform, and each dimension is rated along a continuum. The dimensions used in this study include Dimension 1: Informational vs. involved, Dimension 3: Context dependence, Dimension 4: Overt persuasion, and Dimension 5: Abstract vs. non-abstract information. In AY 2019-2020, a total of 97 student artifacts (N = 97) were collected. For this analysis, we grouped documents into similar assignment genres: research papers (n = 45), technical reports and analyses (n = 7), and engineering laboratory reports (n = 35), with individual engineering students represented at least once in the laboratory report category and once in another category. Findings showed that engineering lab reports are highly informational, minimally persuasive, and used deferred elaboration. Students’ research papers in academic writing courses, conversely, were highly involved, highly persuasive, and featured more immediate elaboration on claims and data. These analyses indicate that students are generally performing as expected in lab report writing in entry-level engineering lab classes, and that this performance is markedly different from their earlier academic writing courses, such as first-year composition (FYC) and technical communication/writing, indicating that students are not merely “writing like engineers” from their first day at college. However, similarities in context dependence suggest that engineering students must still learn to modulate their language dramatically depending on the writing assignment. While some students show little growth from one context to another, others are able to change their register or other linguistic/structural features to meet the needs of their audience.
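    For readers unfamiliar with the part-of-speech counting that underlies MDA-style analysis, here is a rough, simplified sketch using spaCy; it only tallies a few feature rates per 1,000 words and does not reproduce Biber’s factor-analytic dimensions or the tagging software used in the study.

```python
# Rough sketch of the part-of-speech counting behind MDA-style analysis.
# Biber's actual dimensions combine dozens of normalized, weighted features
# via factor analysis, which this simplification does not attempt.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def feature_rates(text: str) -> dict[str, float]:
    """Count a few crude linguistic features and normalize per 1,000 words."""
    doc = nlp(text)
    n_words = sum(1 for t in doc if not t.is_punct and not t.is_space) or 1
    counts = {
        "nouns": sum(t.pos_ == "NOUN" for t in doc),
        "verbs": sum(t.pos_ == "VERB" for t in doc),
        "prepositions": sum(t.pos_ == "ADP" for t in doc),
        "first_person_pronouns": sum(t.lower_ in {"i", "we", "me", "us"} for t in doc),
    }
    # Rates per 1,000 words make texts of different lengths roughly comparable.
    return {k: 1000 * v / n_words for k, v in counts.items()}

lab_report = "The pressure was measured at five flow rates and recorded in Table 2."
print(feature_rates(lab_report))
```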