Award ID contains: 1812660

  1. Chinn, C.; Tan, E.; Chan, C.; Kali, Y. (Eds.)
  2. Models for automated scoring of content in educational applications continue to improve in human-machine agreement, but it remains to be shown that the models achieve gains for the “right” reasons. Reliable scoring and feedback require both high accuracy and construct coverage. In this work, we provide an in-depth quantitative and qualitative analysis of automated scoring models for science explanations of middle school students in an online learning environment, leveraging saliency maps to explore the reasons for individual model score predictions. Our analysis reveals that top-performing models can arrive at the same predictions for very different reasons, and that current model architectures have difficulty detecting ideas in student responses beyond keywords.
  3. Models for automated scoring of content in educational applications continue to improve in human-machine agreement, but it remains to be shown that the models achieve gains for the “right” reasons. Reliable scoring and feedback require both high accuracy and a clear connection between scoring decisions and scoring rubrics. We provide a quantitative and qualitative analysis of automated scoring models for science explanations of middle school students in an online learning environment, leveraging saliency maps to explore the reasons for individual model score predictions. Our analysis reveals that top-performing models can arrive at the same predictions for very different reasons, and that current model architectures have difficulty detecting ideas in student responses beyond keywords.
  4. Recent work on automated scoring of student responses in educational applications has shown gains in human-machine agreement from neural models, particularly recurrent neural networks (RNNs) and pre-trained transformer (PT) models. However, prior research has neglected to investigate the reasons for improvement – in particular, whether models achieve gains for the “right” reasons. Through expert analysis of saliency maps, we analyze the extent to which models attribute importance to words and phrases in student responses that align with question rubrics. We focus on responses to questions embedded in science units for middle school students, accessed via an online classroom system. RNN and PT models were trained to predict an ordinal score from each response’s text, and experts analyzed the generated saliency maps for each response. Our analysis shows that RNN and PT-based models can produce substantially different saliency profiles while often predicting the same scores for the same student responses. While there is some indication that PT models are better able to avoid spurious correlations between high-frequency words and scores, the results indicate that both model classes focus on learning statistical correlations between scores and words and do not demonstrate an ability to learn key phrases or longer linguistic units corresponding to ideas, which are what question rubrics target. These results point to a need for models that better capture student ideas in educational applications.
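The saliency analysis described in the abstracts above can be illustrated with a simple occlusion-style method: score the full response, then re-score it with each word removed, and treat the score drop as that word's importance. The sketch below is a minimal, hypothetical illustration – the keyword-counting scorer stands in for a trained RNN or transformer, and the rubric keywords are invented for the example.

```python
def keyword_scorer(tokens):
    """Toy stand-in for a trained scoring model: counts rubric keywords.
    (Hypothetical; a real system would query an RNN or transformer.)"""
    rubric_keywords = {"energy", "transfer", "molecules"}
    return sum(1 for t in tokens if t.lower() in rubric_keywords)

def occlusion_saliency(tokens, scorer):
    """Importance of each token = score drop when that token is removed."""
    base = scorer(tokens)
    saliency = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + tokens[i + 1:]
        saliency.append((tokens[i], base - scorer(occluded)))
    return saliency

response = "Heat energy moves between molecules".split()
print(occlusion_saliency(response, keyword_scorer))
# Rubric words ("energy", "molecules") receive saliency 1; all others 0.
```

A saliency profile like this makes the finding above concrete: a model whose importance mass sits entirely on isolated keywords, rather than on multi-word idea phrases, is scoring by lexical correlation rather than by the ideas the rubric targets.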
  5. The Next Generation Science Standards (NGSS) emphasize integrating three dimensions of science learning: disciplinary core ideas, cross-cutting concepts, and science and engineering practices. In this study, we develop formative assessments that measure student understanding of the integration of these three dimensions, along with automated scoring methods that distinguish among them. The formative assessments allow students to express their emerging ideas while also capturing progress in integrating core ideas, cross-cutting concepts, and practices. We describe how item and rubric design can work in concert with an automated scoring system to independently score science explanations from multiple perspectives, discuss item design considerations, and provide validity evidence for the automated scores.
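Scoring a single explanation independently along the three NGSS dimensions, as item 5 describes, amounts to maintaining one rubric (and in practice one trained model) per dimension and scoring each in isolation. A minimal sketch, with hypothetical keyword rubrics standing in for the trained per-dimension scorers:

```python
# Hypothetical per-dimension rubrics; real systems train a model per dimension.
RUBRICS = {
    "disciplinary_core_idea": {"energy", "matter", "force"},
    "crosscutting_concept": {"cause", "effect", "pattern", "system"},
    "science_practice": {"evidence", "model", "data", "claim"},
}

def score_by_dimension(response):
    """Return an independent score for each NGSS dimension of a response."""
    tokens = {t.lower().strip(".,") for t in response.split()}
    return {dim: len(tokens & keywords) for dim, keywords in RUBRICS.items()}

print(score_by_dimension("The data provide evidence that energy causes a pattern."))
# → {'disciplinary_core_idea': 1, 'crosscutting_concept': 1, 'science_practice': 2}
```

The design point is that the dimensions stay decoupled: a response can score high on science practices (citing data and evidence) while scoring low on the core idea, which is exactly the diagnostic signal a formative assessment needs.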
  6. With the increasing use of online interactive environments for science and engineering education in grades K-12, there is a growing need for detailed automatic analysis of student explanations of ideas and reasoning. With the widespread adoption of the Next Generation Science Standards (NGSS), an important goal is identifying the alignment of student ideas with NGSS-defined dimensions of proficiency. We develop a set of constructed-response formative assessment items that call for students to express and integrate ideas across multiple dimensions of the NGSS, and explore the effectiveness of state-of-the-art neural sequence-labeling methods for identifying discourse-level expressions of ideas that align with the NGSS. We discuss challenges for the idea-detection task in the formative science assessment context.
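Sequence labeling of the kind item 6 applies typically tags each token with BIO labels (B- begins a span, I- continues it, O is outside), and idea spans are recovered by decoding the tag sequence. The decoding step can be sketched as below; the tag name `IDEA` and the example sentence are illustrative, not taken from the paper.

```python
def decode_bio(tokens, tags):
    """Convert per-token BIO tags into (label, phrase) idea spans."""
    spans, current_label, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_label:                      # close any open span first
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_label == tag[2:]:
            current_tokens.append(token)
        else:                                      # "O" or an inconsistent tag
            if current_label:
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = None, []
    if current_label:                              # flush a span at sentence end
        spans.append((current_label, " ".join(current_tokens)))
    return spans

tokens = ["Heat", "transfers", "energy", "to", "the", "air"]
tags   = ["O", "B-IDEA", "I-IDEA", "I-IDEA", "I-IDEA", "I-IDEA"]
print(decode_bio(tokens, tags))
# → [('IDEA', 'transfers energy to the air')]
```

Because ideas are multi-word, discourse-level units rather than keywords, the model must learn span boundaries – which is precisely where the keyword-focused scoring models criticized in items 2–4 fall short.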
  7. With the widespread adoption of the Next Generation Science Standards (NGSS), science teachers and online learning environments face the challenge of evaluating students' integration of different dimensions of science learning. Recent advances in representation learning for natural language processing have proven effective across many tasks, but a rigorous evaluation of the relative merits of these methods for scoring complex constructed-response formative assessments has not previously been carried out. We present a detailed empirical investigation of feature-based, recurrent neural network, and pre-trained transformer models for scoring content in real-world formative assessment data. We demonstrate that recent neural methods can rival or exceed the performance of feature-based methods. We also provide evidence that different classes of neural models exploit different learning cues, and that pre-trained transformer models may be more robust to spurious, dataset-specific cues, better reflecting scoring rubrics.
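The feature-based baseline that item 7 compares against neural models is, in its simplest form, a bag-of-words representation fed into a linear model. A stdlib-only sketch with hand-set weights follows; the word weights are invented for illustration (a real baseline would learn them from human-scored responses, e.g. by logistic or ordinal regression):

```python
from collections import Counter

# Hypothetical learned weights mapping word features to score contributions.
WEIGHTS = {"energy": 1.5, "transfer": 1.2, "because": 0.8, "stuff": -0.5}
BIAS = 0.0

def extract_features(response):
    """Bag-of-words counts: the classic feature-based representation."""
    return Counter(t.lower().strip(".,") for t in response.split())

def linear_score(response):
    """Dot product of features and weights, as in a linear scoring model."""
    feats = extract_features(response)
    return BIAS + sum(WEIGHTS.get(w, 0.0) * n for w, n in feats.items())

print(linear_score("Energy transfer happens because molecules collide."))
# → 3.5  (1.5 for "energy" + 1.2 for "transfer" + 0.8 for "because")
```

A model of this shape makes the paper's concern easy to see: every word contributes independently of its neighbors, so the scorer can only reward keywords, never the phrase- or discourse-level ideas that rubrics describe – the gap the neural models are meant to close.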