The process of synthesizing solutions for mathematical problems is cognitively complex. Students formulate and implement strategies to solve mathematical problems, develop solutions, and make connections between their learned concepts as they apply their reasoning skills to solve such problems. Gaps in student knowledge or shallowly learned concepts may cause students to guess at answers or otherwise apply the wrong approach, resulting in errors in their solutions. Despite the complexity of the synthesis process in mathematics learning, teachers' knowledge and ability to anticipate areas of potential difficulty are essential and correlated with student learning outcomes. Preemptively identifying the common misconceptions that lead students to subsequent incorrect attempts can be arduous and unreliable, even for experienced teachers. This paper aims to help teachers identify the incorrect attempts that commonly occur when students work on math problems so that they can address the underlying knowledge gaps and common misconceptions through feedback. We report on a longitudinal analysis of historical data from a computer-based learning platform, exploring the incorrect answers in prior school years ('15-'20) to establish the commonality of wrong answers on two Open Educational Resources (OER) curricula, Illustrative Math (IM) and EngageNY (ENY), for grades 6, 7, and 8. We observe that incorrect answers are pervasive across the 5 academic years despite changes in the underlying student and teacher populations. Building on our findings regarding Common Wrong Answers (CWAs), we report on the goals and task analysis that we leveraged in designing and developing a crowdsourcing platform for teachers to write Common Wrong Answer Feedback (CWAF) aimed at remediating the underlying causes of CWAs.
Finally, we report on an in vivo study analyzing the effectiveness of CWAFs using two approaches: first, an intent-to-treat analysis using next-problem correctness as the dependent measure after students receive CWAF; second, a treated analysis using next-attempt correctness as the dependent measure after students receive CWAF. With the rise in popularity and usage of computer-based learning platforms, this paper explores the potential benefits of scalability in identifying CWAs and the subsequent usage of crowd-sourced CWAFs in enhancing the student learning experience through remediation.
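The CWA identification described in the abstract can be sketched as a frequency analysis over answer logs: count how often each distinct wrong answer recurs per problem, then check that it persists across school years. This is an illustrative reconstruction, not the authors' pipeline; the column names, toy data, and the 25% share threshold are all assumptions.

```python
# Hypothetical sketch of CWA detection from answer logs.
# Column names, toy data, and the 25% threshold are assumptions,
# not taken from the paper.
import pandas as pd

log = pd.DataFrame({
    "problem_id": ["p1"] * 7 + ["p2"] * 4,
    "year":   [2015, 2015, 2016, 2016, 2016, 2017, 2017, 2015, 2016, 2016, 2017],
    "answer": ["3/4", "4/3", "4/3", "43", "3/4", "4/3", "4/3", "12", "12", "15", "12"],
    "correct": [True, False, False, False, True, False, False, False, False, True, False],
})

wrong = log[~log["correct"]]

# Share of a problem's wrong answers that each distinct wrong answer accounts for.
freq = wrong.groupby("problem_id")["answer"].value_counts(normalize=True)
cwas = freq[freq >= 0.25].rename("wrong_share").reset_index()

# A CWA is only interesting if it recurs across school years despite
# changes in the student population, as the paper observes.
years = (wrong.groupby(["problem_id", "answer"])["year"]
              .nunique().rename("n_years").reset_index())
cwas = cwas.merge(years, on=["problem_id", "answer"])
print(cwas)
```

Rare slips (here the one-off "43") fall below the share threshold, while answers such as "4/3" that recur in every year surface as CWA candidates for teachers to write feedback against.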
Assessing Bootstrap: Algebra Students on Scaffolded and Unscaffolded Word Problems
Bootstrap:Algebra is a curricular module designed to integrate introductory computing into an algebra class; the module aims to help students improve on various essential learning outcomes from state and national algebra standards. In prior work, we published initial findings about student performance gains on algebra problems after taking Bootstrap. While the results were promising, the dataset was not large, and had students working on algebra problems that had been scaffolded with Bootstrap's pedagogy. This paper reports on a more detailed study with (a) data from more than three times as many students, (b) analysis of performance changes in incorrect answers, (c) some problems in which the Bootstrap scaffolds have been removed, and (d) an IRT analysis across the elements of Bootstrap's program-design pedagogy. Our results confirm that students improve on algebraic word problems after completing the module, even on unscaffolded problems. The nature of incorrect answers to symbolic-form questions also appears to improve after Bootstrap.
- Award ID(s):
- 1535276
- PAR ID:
- 10072911
- Date Published:
- Journal Name:
- Proceedings of the 49th ACM Technical Symposium on Computer Science Education
- Page Range / eLocation ID:
- 8 to 13
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Prior work analyzing tutoring sessions provided evidence that highly effective tutors, through their interaction with students and their experience, can perceptively recognize incorrect processes or “bugs” when students incorrectly answer problems. Researchers have studied these tutoring interactions examining instructional approaches to address incorrect processes and observed that the format of the feedback can influence learning outcomes. In this work, we recognize the incorrect answers caused by these buggy processes as Common Wrong Answers (CWAs). We examine the ability of teachers and instructional designers to identify CWAs proactively. As teachers and instructional designers deeply understand the common approaches and mistakes students make when solving mathematical problems, we examine the feasibility of proactively identifying CWAs and generating Common Wrong Answer Feedback (CWAFs) as a formative feedback intervention for addressing student learning needs. As such, we analyze CWAFs in three sets of analyses. We first report on the accuracy of the CWAs predicted by the teachers and instructional designers on the problems across two activities. We then measure the effectiveness of the CWAFs using an intent-to-treat analysis. Finally, we explore the existence of personalization effects of the CWAFs for the students working on the two mathematics activities.
In order to facilitate student learning, it is important to identify and remediate misconceptions and incomplete knowledge pertaining to the assigned material. In the domain of mathematics, prior research with computer-based learning systems has utilized the commonality of incorrect answers to problems as a way of identifying potential misconceptions among students. Much of this research, however, has been limited to the use of close-ended questions, such as multiple-choice and fill-in-the-blank problems. In this study, we explore the potential usage of natural language processing and clustering methods to examine potential misconceptions across student answers to both close- and open-ended problems. We find that our proposed methods show promise for distinguishing misconception from non-conception, but may need further development to improve the interpretability of specific misunderstandings exhibited through student explanations.
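The NLP-and-clustering approach this abstract describes can be illustrated with TF-IDF features and k-means over short student explanations. This is a minimal sketch, not the authors' actual method; the sample answers, the choice of vectorizer, and k = 2 are invented for demonstration.

```python
# Illustrative sketch (not the authors' pipeline): grouping short student
# explanations by lexical similarity to surface candidate misconceptions.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented open-ended answers to a fraction-addition problem.
answers = [
    "I added the numerators and the denominators",
    "added numerators and added denominators together",
    "I found a common denominator first then added",
    "converted to a common denominator and added numerators",
]

X = TfidfVectorizer().fit_transform(answers)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Answers sharing a label may reflect the same (mis)conception; a human
# still has to interpret each cluster, which is the limitation the
# abstract notes about interpretability.
for label, answer in zip(labels, answers):
    print(label, answer)
```

Here the first two answers (adding denominators, a classic misconception) should land in one cluster and the common-denominator answers in the other, but with real data the clusters are only candidates for inspection, not labeled misconceptions.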
Engaging students with well-designed multiple-choice questions during class and asking them to discuss their answers with their peers after each student has contemplated the response individually can be an effective evidence-based active-engagement pedagogy in physics courses. Moreover, validated sequences of multiple-choice questions are more likely to help students build a good knowledge structure of physics than individual multiple-choice questions on various topics. Here we discuss a framework to develop robust sequences of multiple-choice questions and then use the framework for the development, validation and implementation of a sequence of multiple-choice questions focusing on helping students learn quantum mechanics via the Stern–Gerlach experiment (SGE) that takes advantage of the guided inquiry-based learning sequences in an interactive tutorial on the same topic. The extensive research in developing and validating the multiple-choice question sequence (MQS) strives to make it effective for students with diverse prior preparation in upper-level undergraduate quantum physics courses. We discuss student performance on an assessment task focusing on the SGE after traditional lecture-based instruction versus after engaging with the research-validated MQS administered as clicker questions in which students had the opportunity to discuss their responses with their peers.
Many students tend to provide intuitively appealing (but incorrect) responses to some physics questions despite demonstrating (on isomorphic questions) the formal knowledge necessary to reason correctly. These inconsistencies in reasoning are persistent and remain even after evidence-based instruction. This project probed whether a collaborative group exam could serve not only as an innovative assessment tool but also as an instructional intervention that helps address persistent reasoning difficulties. Specifically, students were given opportunities to revisit their answers to questions known to elicit intuitively appealing responses in a collaborative group exam component immediately following a traditional individual exam. The efficacy of this approach was compared to that of a more traditional instructor-led exam review session. Both approaches yielded moderate improvements in performance on the final exam. However, additional multi-faceted data analysis provided further insights into student reasoning difficulties that suggested further implications for instruction and research.