

Title: Applying a mathematical sense-making framework to student work and its potential for curriculum design
This paper extends prior work establishing an operationalized framework of mathematical sense making (MSM) in physics. The framework differentiates between the object being understood (either physical or mathematical) and various tools (physical or mathematical) used to mediate the sense-making process. This results in four modes of MSM that can be coordinated and linked in various ways. Here, the framework is applied to novel modalities of student written work (both short answer and multiple choice). In detailed studies of student reasoning about the photoelectric effect, we associate these MSM modes with particular multiple-choice answers, and substantiate this association by linking both the MSM modes and multiple-choice answers with finer-grained reasoning elements that students use in solving a specific problem. Through the multiple associations between MSM modes, distributions of reasoning elements, and multiple-choice answers, we confirm the applicability of this framework to analyzing these sparser modalities of student work and its utility for analyzing larger-scale (N > 100) datasets. The association between individual reasoning elements and both MSM modes and MC answers suggests that it is possible to cue particular modes of student reasoning and answer selection. Such findings suggest that this framework has potential for application to the analysis and design of curriculum.
Award ID(s):
1625824
PAR ID:
10230708
Author(s) / Creator(s):
Date Published:
Journal Name:
Physical Review Physics Education Research
Volume:
17
ISSN:
2469-9896
Page Range / eLocation ID:
010138
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    We present a framework designed to help categorize various sense-making moves, allowing for greater specificity in describing and understanding student reasoning and also in the development of curriculum to support this reasoning. The framework distinguishes between the mechanism of student reasoning (the cognitive tool being employed) and what students are reasoning about (the object). Noting that either the tool or the object could be mathematical or physical, the framework includes four basic sense-making modes: use of a mathematical tool to understand a mathematical object, use of a mathematical tool to understand a physical object, use of a physical tool to understand a mathematical object, and use of a physical tool to understand a physical object. We identify three fundamental processes by which these modes may be combined (translation, chaining, and coordination) and present a visual representation that captures both the individual reasoning modes and the processes by which they are combined. The utility of the framework as a tool for describing student reasoning is demonstrated through the analysis of two extended reasoning episodes. Finally, implications of this framework for curricular design are discussed.
  2. Teachers often rely on the use of a range of open-ended problems to assess students' understanding of mathematical concepts. Beyond traditional conceptions of student open-ended work, commonly in the form of textual short-answer or essay responses, the use of figures, tables, number lines, graphs, and pictographs are other examples of open-ended work common in mathematics. While recent developments in areas of natural language processing and machine learning have led to automated methods to score student open-ended work, these methods have largely been limited to textual answers. Several computer-based learning systems allow students to take pictures of hand-written work and include such images within their answers to open-ended questions. With that, however, there are few-to-no existing solutions that support the auto-scoring of student hand-written or drawn answers to questions. In this work, we build upon an existing method for auto-scoring textual student answers and explore the use of OpenAI/CLIP, a deep learning embedding method designed to represent both images and text, as well as Optical Character Recognition (OCR) to improve model performance. We evaluate the performance of our method on a dataset of student open-responses that contains both text- and image-based responses, and find a reduction of model error in the presence of images when controlling for other answer-level features.
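The abstract above describes scoring student answers via embeddings that represent both text and images in a shared vector space. A minimal sketch of one way such embeddings can be used for scoring is similarity-based matching against previously scored answers. This is an illustrative stand-in, not the paper's actual model: the embedding vectors below are hypothetical placeholders for what would, in practice, come from a model such as OpenAI/CLIP (possibly augmented with OCR'd text from the student's image), and the nearest-neighbor rule is one simple scoring strategy among many.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def predict_score(answer_vec, scored_answers):
    """Assign the score of the most similar previously scored answer."""
    best_vec, best_score = max(
        scored_answers, key=lambda pair: cosine(answer_vec, pair[0])
    )
    return best_score

# Hypothetical precomputed embeddings (e.g., CLIP text and/or image features),
# each paired with a teacher-assigned score.
scored = [
    ([0.9, 0.1, 0.0], 1.0),   # fully correct worked example
    ([0.1, 0.8, 0.3], 0.5),   # partially correct
    ([0.0, 0.2, 0.9], 0.0),   # incorrect
]

new_answer = [0.8, 0.2, 0.1]
print(predict_score(new_answer, scored))  # most similar to the correct example
```

In a real system, the per-answer feature vector would be produced by encoding the student's typed text and any uploaded image, and a trained regressor would typically replace the single-nearest-neighbor rule; this sketch only shows how a shared text/image embedding space makes image-bearing answers scorable with the same machinery as textual ones.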