In this paper, we present a science writing assignment in which students target specific audiences when writing about a socioscientific issue and participate in a peer review process. The assignment helps students practice inclusive science communication, with a focus on engaging distinct audiences with the intersections of science and social justice. Students are introduced to evidence-based tools both for tailoring communication to particular audiences and for assessing writing quality. The assignment is novel in that it has students think about inclusion in STEM, science writing, and peer review together, key disciplinary skills that are not always taught in STEM courses. Although it was piloted in chemistry and environmental engineering courses, the assignment could easily be adapted for other disciplines.
Automated Support to Scaffold Students’ Written Explanations in Science
In principle, educators can use writing to scaffold students’ understanding of increasingly complex science ideas. In practice, formative assessment of students’ science writing is highly labor-intensive. We present PyrEval+CR, an automated tool for formative assessment of middle school students’ science essays. It identifies each idea in a student’s essay along with that idea’s importance in the curriculum.
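The abstract does not detail PyrEval+CR’s internals; purely as a sketch of the general approach it describes (detecting each idea in an essay and weighting it by curricular importance), one could match essay sentences against weighted curriculum ideas with off-the-shelf sentence embeddings. The ideas, weights, threshold, and model choice below are illustrative assumptions, not the tool’s actual components.

```python
# Minimal sketch, NOT PyrEval+CR itself: match each essay sentence to
# the closest curriculum idea by embedding similarity and report the
# matched idea's importance weight. All data below is invented.
from sentence_transformers import SentenceTransformer, util

# Hypothetical curriculum ideas mapped to importance weights.
CURRICULUM_IDEAS = {
    "Thermal energy flows from warmer objects to cooler objects.": 3,
    "Particles move faster at higher temperatures.": 2,
    "Metals conduct heat better than wood does.": 1,
}

model = SentenceTransformer("all-MiniLM-L6-v2")
idea_texts = list(CURRICULUM_IDEAS)
idea_vecs = model.encode(idea_texts, convert_to_tensor=True)

def score_essay(sentences, threshold=0.6):
    """Return (sentence, matched idea, importance) triples."""
    sent_vecs = model.encode(sentences, convert_to_tensor=True)
    sims = util.cos_sim(sent_vecs, idea_vecs)  # sentences x ideas
    matches = []
    for i, sentence in enumerate(sentences):
        j = int(sims[i].argmax())
        if float(sims[i][j]) >= threshold:  # skip off-topic sentences
            idea = idea_texts[j]
            matches.append((sentence, idea, CURRICULUM_IDEAS[idea]))
    return matches

print(score_essay([
    "Heat moves from the hot soup into the cold spoon.",
    "My favorite season is winter.",
]))
```

A real system would also need sentence segmentation and a way to handle ideas expressed across multiple sentences; this sketch shows only the matching step.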
- Award ID(s): 2010483
- PAR ID: 10329342
- Journal Name: International Conference on Artificial Intelligence in Education
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Martin Fred; Norouzi, Narges; Rosenthal, Stephanie (Eds.) This paper examines the use of LLMs to support the grading and explanation of short-answer formative assessments in K12 science topics. While significant work has been done on programmatically scoring well-structured student assessments in math and computer science, many of these approaches produce a numerical score and stop short of providing teachers and students with explanations for the assigned scores. In this paper, we investigate few-shot, in-context learning with chain-of-thought reasoning and active learning using GPT-4 for automated assessment of students’ answers in a middle school Earth Science curriculum. Our findings from this human-in-the-loop approach demonstrate success in scoring formative assessment responses and in providing meaningful explanations for the assigned scores. We then perform a systematic analysis of the advantages and limitations of our approach. This research provides insight into how human-in-the-loop methods can support the continual improvement of automated grading for open-ended science assessments. (A sketch of this style of few-shot, chain-of-thought grading prompt appears after this list.)
- Teachers in small communities may be geographically isolated and have smaller collegial networks. Consequently, teachers in these settings may have limited exposure to contemporary strategies for engaging learners in science and engineering as suggested in the Next Generation Science Standards (NGSS). Thus, we provided a five-day online professional learning (PL) experience, followed by a year of modest supports (e.g., an online professional learning community), to over 150 rural teachers from four states (CA, MT, ND, WY) to bridge the access gap and to enhance their instructional capabilities in teaching NGSS-aligned science and engineering lessons. Because the quality of the questions posed in a formative assessment shapes the quality of student thinking and what that thinking reveals, we provided a formative assessment task, “Planning a Park,” developed by the Stanford NGSS Assessment Project (SNAP) and SCALE Science at WestEd, for participating teachers to implement in their classrooms. Teachers received online professional learning opportunities about the task before and after administering it. To understand their experiences with the task, we collected multiple data sources for triangulation: surveys about teachers’ preparedness to implement science lessons, teachers’ self-reported observations while delivering the task, their reflections on students’ performance, examples of student responses, and interviews with a sub-sample of teachers. As an initial analysis, we employed a descriptive coding process to capture teachers’ diverse experiences with the SCALE task (Saldaña, 2021). In this session, we will report rural teachers’ experiences with the formative assessment task that was provided as part of the year of modest supports. We believe this study will support the science education community, especially those preparing teachers to teach science and researchers working on assessment, by sharing the benefits of implementing a formative assessment task during inservice teachers’ professional learning.
- With the increasing use of online interactive environments for science and engineering education in grades K-12, there is a growing need for detailed automatic analysis of student explanations of ideas and reasoning. With the widespread adoption of the Next Generation Science Standards (NGSS), an important goal is identifying the alignment of student ideas with NGSS-defined dimensions of proficiency. We develop a set of constructed-response formative assessment items that call for students to express and integrate ideas across multiple dimensions of the NGSS, and we explore the effectiveness of state-of-the-art neural sequence-labeling methods for identifying discourse-level expressions of ideas that align with the NGSS. We discuss challenges for the idea detection task in the formative science assessment context. (A sequence-labeling sketch for this kind of idea detection appears after this list.)
- In response to Li, Reigh, He, and Miller's commentary, “Can we and should we use artificial intelligence for formative assessment in science,” we argue that artificial intelligence (AI) is already being widely employed in formative assessment across various educational contexts. While agreeing with Li et al.'s call for further studies on equity issues related to AI, we emphasize the need for science educators to adapt to the AI revolution that has outpaced the research community. We challenge the somewhat restrictive view of formative assessment presented by Li et al., highlighting the significant contributions of AI in providing formative feedback to students, assisting teachers in assessment practices, and aiding in instructional decisions. We contend that AI-generated scores should not be equated with the entirety of formative assessment practice; no single assessment tool can capture all aspects of student thinking and backgrounds. We address concerns raised by Li et al. regarding AI bias and emphasize the importance of empirical testing and evidence-based arguments when claiming bias. We assert that AI-based formative assessment does not necessarily lead to inequity and can, in fact, contribute to more equitable educational experiences. Furthermore, we discuss how AI can facilitate the diversification of representational modalities in assessment practices and highlight the potential benefits of AI in saving teachers’ time and providing them with valuable assessment information. We call for a shift in perspective, from viewing AI as a problem to be solved to recognizing its potential as a collaborative tool in education. We emphasize the need for future research to focus on the effective integration of AI in classrooms, teacher education, and the development of AI systems that can adapt to diverse teaching and learning contexts. We conclude by underlining the importance of addressing AI bias, understanding its implications, and developing guidelines for best practices in AI-based formative assessment.
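Following up on the forward reference in the first related item: the few-shot, chain-of-thought grading setup it describes might look roughly like the sketch below. The rubric, 1-3 scale, worked examples, and prompt wording are invented placeholders; only the use of GPT-4 with few-shot chain-of-thought prompting comes from the abstract.

```python
# Hedged sketch of few-shot, chain-of-thought grading with the OpenAI
# API. The rubric, examples, and scale are invented; the study's
# actual prompts and scoring scheme are not reproduced here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT_EXAMPLES = """\
Question: Why do we see phases of the Moon?
Answer: Because Earth's shadow covers part of the Moon.
Reasoning: The response confuses lunar phases with eclipses and does
not mention the Moon's changing position relative to the Sun and Earth.
Score: 1

Question: Why do we see phases of the Moon?
Answer: As the Moon orbits Earth, we see different amounts of its lit half.
Reasoning: The response correctly ties the visible illuminated portion
to the Moon's orbital position.
Score: 3
"""

def grade(question: str, answer: str) -> str:
    """Ask the model to reason step by step, then emit a score."""
    prompt = (
        "You grade middle school Earth Science answers on a 1-3 scale.\n"
        "Reason step by step, then give a score, as in these examples:\n\n"
        f"{FEW_SHOT_EXAMPLES}\n"
        f"Question: {question}\nAnswer: {answer}\nReasoning:"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep scoring as deterministic as possible
    )
    return response.choices[0].message.content

print(grade("Why do we see phases of the Moon?",
            "The Moon makes its own light sometimes."))
```

In a human-in-the-loop workflow like the one the abstract describes, a teacher would review these reasoning-plus-score outputs, and corrected cases could be fed back in as new few-shot examples (the active-learning element).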
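Likewise, the sequence-labeling framing in the third related item can be sketched with a standard token-classification model. The checkpoint below is a public NER model used purely as a stand-in; the paper’s system would instead be fine-tuned with NGSS-aligned idea labels, which are not available here.

```python
# Hedged sketch of idea detection as sequence labeling: a
# token-classification model tags token spans, and B-/I- tags are
# merged into labeled spans. "dslim/bert-base-NER" is a stand-in
# checkpoint, NOT the paper's model.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",   # stand-in; real system uses idea labels
    aggregation_strategy="simple", # merge B-/I- tokens into spans
)

response = ("The ice melts because thermal energy moves from the warm "
            "water into the colder ice.")

for span in tagger(response):
    # With an idea-labeled model, entity_group would name an
    # NGSS-aligned idea (e.g. ENERGY_TRANSFER) rather than a
    # generic NER category.
    print(span["entity_group"], "->", span["word"])
```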