Title: Improving Automated Scoring of Student Open Responses in Mathematics
Open-ended questions in mathematics are commonly used by teachers to monitor and assess students’ deeper conceptual understanding of content. Student answers to these types of questions often exhibit a combination of language, drawn diagrams and tables, and mathematical formulas and expressions that supply teachers with insight into the processes and strategies students adopt in formulating their responses. While these responses help inform teachers about their students’ progress and understanding, the variation among them can make it difficult and time-consuming for teachers to manually read, assess, and provide feedback on student work. For this reason, there has been a growing body of research on AI-powered tools to support teachers in this task. This work builds on that prior research by introducing a model designed to help automate the assessment of student responses to open-ended questions in mathematics through sentence-level semantic representations. We find that this model outperforms previously published benchmarks across three different metrics. With this model, we conduct an error analysis to examine characteristics of student responses that may be considered to further improve the method.
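The abstract does not spell out the model, but one common way to realize sentence-level semantic scoring is to embed each response with a pretrained sentence encoder and transfer the score of the most similar previously graded response. Below is a minimal sketch along those lines; the encoder name, the toy graded corpus, and the 1-nearest-neighbor rule are illustrative assumptions, not the paper's pipeline.

```python
# Illustrative sketch only: score a new response by semantic similarity to
# previously teacher-graded responses. The encoder name, the toy data, and the
# 1-nearest-neighbor rule are assumptions, not the paper's exact method.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any pretrained sentence encoder

# Hypothetical graded corpus: (response text, teacher-assigned score on a 0-4 scale)
graded = [
    ("I multiplied both sides by 3 to isolate x.", 4),
    ("The answer is 12 because it just is.", 1),
    ("I added the fractions after finding a common denominator.", 4),
]
texts, scores = zip(*graded)
corpus_emb = encoder.encode(list(texts), normalize_embeddings=True)

def predict_score(response: str) -> int:
    """Assign the score of the semantically closest graded response."""
    emb = encoder.encode([response], normalize_embeddings=True)[0]
    sims = corpus_emb @ emb  # cosine similarity, since embeddings are unit-norm
    return scores[int(np.argmax(sims))]

print(predict_score("I found a common denominator and then added the fractions."))
# -> 4 (closest to the third graded response)
```

In practice the graded corpus would be the scoring history for that specific problem, and the similarity rule could be swapped for whatever distance measure the authors actually evaluated.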
Award ID(s):
1931419
NSF-PAR ID:
10292986
Author(s) / Creator(s):
Date Published:
Journal Name:
International Conference on Educational Data Mining
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Background: Teachers often rely on open-ended questions to assess students' conceptual understanding of assigned content. Particularly in mathematics, teachers use these types of questions to gain insight into the processes and strategies students adopt in solving problems, beyond what is possible with more closed-ended problem types. While these problems are valuable to teachers, the variation in student responses makes them difficult and time-consuming to evaluate and provide directed feedback on. It is well established that feedback, both as a numeric score and, more importantly, as teacher-authored comments, can guide students in how to improve, leading to increased learning. For this reason, teachers need better support not only for assessing students' work but also for providing meaningful and directed feedback to students. Objectives: In this paper, we seek to develop, evaluate, and examine machine learning models that support automated open-response assessment and feedback. Methods: We build upon prior research on the automatic assessment of student responses to open-ended problems and introduce a novel approach that leverages student log data combined with machine learning and natural language processing methods. Utilizing sentence-level semantic representations of student responses to open-ended questions, we propose a collaborative filtering-based approach to both predict student scores and recommend appropriate feedback messages for teachers to send to their students. Results and Conclusion: We find that our method outperforms previously published benchmarks across three different metrics for the task of predicting student performance. Through an error analysis, we identify several areas where future work may be able to improve upon our approach.
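As a rough illustration of the collaborative filtering idea described in that abstract (scores predicted from similar graded responses, with a neighbor's feedback message suggested to the teacher), here is a hedged sketch. The encoder, the k-nearest-neighbor weighting, and the toy graded corpus are all assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a memory-based collaborative-filtering step that
# predicts a score as a similarity-weighted average over the k most similar
# graded responses and suggests the closest neighbor's feedback message.
# The encoder, k, the weighting, and the toy data are all assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical corpus: (response, teacher score, teacher feedback message)
graded = [
    ("I multiplied both sides by 3 to undo the division.", 4,
     "Nice work: clear use of inverse operations."),
    ("The answer is 12.", 2,
     "Correct answer, but please show how you got 12."),
    ("I guessed.", 0,
     "Try re-reading the problem and writing out your steps."),
]
texts, scores, feedback = zip(*graded)
corpus_emb = encoder.encode(list(texts), normalize_embeddings=True)
scores = np.array(scores, dtype=float)

def predict(response: str, k: int = 2) -> tuple[float, str]:
    """Return (predicted score, suggested feedback) for a new response."""
    emb = encoder.encode([response], normalize_embeddings=True)[0]
    sims = corpus_emb @ emb                 # cosine similarities to graded corpus
    top = np.argsort(sims)[::-1][:k]        # indices of the k nearest neighbors
    w = np.clip(sims[top], 0.0, None)       # drop negative similarities
    score = float(w @ scores[top] / (w.sum() + 1e-9))
    return score, feedback[int(top[0])]     # nearest neighbor's feedback message

print(predict("I solved it by multiplying each side by 3."))
```

The weighted average stands in for whatever neighborhood model the paper used; the key design point carried over from the abstract is that one mechanism serves both tasks, numeric score prediction and feedback-message recommendation.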