Automatic short answer grading is an important research direction in the exploration of how to use artificial intelligence (AI)-based tools to improve education. Current state-of-the-art approaches use neural language models to create vectorized representations of students' responses, followed by classifiers to predict the score. However, these approaches have several key limitations, including i) they use pre-trained language models that are not well-adapted to educational subject domains and/or student-generated text, and ii) they almost always train one model per question, ignoring the linkage across questions and creating a significant model storage problem due to the size of advanced language models. In this paper, we study the problem of automatic short answer grading for students' responses to math questions and propose a novel framework for this task. First, we use MathBERT, a variant of the popular language model BERT adapted to mathematical content, as our base model and fine-tune it on the downstream task of student response grading. Second, we use an in-context learning approach that provides scoring examples as input to the language model, supplying additional context information and promoting generalization to previously unseen questions. We evaluate our framework on a real-world dataset of student responses to open-ended math questions and show that our framework (often significantly) outperforms existing approaches, especially for new questions that are not seen during training.
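The two ingredients described in the abstract, a MathBERT encoder fine-tuned as a score classifier and an in-context input that prepends scored example responses, can be sketched roughly as follows using the Hugging Face transformers library. This is a minimal illustration rather than the authors' released code: the checkpoint name `tbs17/MathBERT` is a publicly available MathBERT variant on Hugging Face, and the input template, the `[SEP]` separator choice, and the five-point score scale are all assumptions for the sake of the example.

```python
# Minimal sketch (not the paper's implementation) of in-context short answer
# grading with a MathBERT-style encoder. Assumptions: the tbs17/MathBERT
# checkpoint, a 0-4 score scale, and the string template used below.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_SCORES = 5  # assumed score scale, e.g. integer scores 0-4

tokenizer = AutoTokenizer.from_pretrained("tbs17/MathBERT")
# The classification head is freshly initialized; it would be fine-tuned on
# graded student responses before real use.
model = AutoModelForSequenceClassification.from_pretrained(
    "tbs17/MathBERT", num_labels=NUM_SCORES
)

def build_in_context_input(question, scored_examples, response):
    """Concatenate the question, a few (response, score) exemplars, and the
    target response into one sequence, so a single classifier can generalize
    across questions instead of training one model per question."""
    parts = [f"question: {question}"]
    for ex_response, ex_score in scored_examples:
        parts.append(f"example response: {ex_response} score: {ex_score}")
    parts.append(f"response: {response}")
    return " [SEP] ".join(parts)

text = build_in_context_input(
    "What is the slope of y = 2x + 3?",
    [("The slope is 2.", 4), ("It is 3.", 0)],
    "Slope equals 2 because it is the coefficient of x.",
)
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_score = logits.argmax(dim=-1).item()
print(predicted_score)
```

Packing the exemplars into the input sequence is what lets the model score responses to questions it never saw during fine-tuning: at inference time, only the in-context examples change, not the model weights.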