

Title: Developing Algebraic Conceptual Understanding: Can procedural knowledge get in the way?
In this study, we use latent class analysis, distractor analysis, and qualitative analysis of cognitive interviews of student responses to questions on an algebra concept inventory to generate theories about how students’ selections of specific answer choices may reflect different stages or types of algebraic conceptual understanding. Our analysis reveals three groups of students in elementary algebra courses, which we label as “mostly random guessing”, “some procedural fluency with key misconceptions”, and “procedural fluency with emergent conceptual understanding”. Student responses also revealed high rates of misconceptions that stem from misuse or misunderstanding of procedures and whose prevalence often correlates with higher levels of procedural fluency.
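As a rough illustration of the distractor-analysis step described above, the sketch below tabulates how often each answer choice is selected by lower- and higher-scoring students; the response matrix, item labels, answer key, and median split are hypothetical placeholders, not data or procedures from the study.

import numpy as np
import pandas as pd

# Hypothetical response data: one row per student, one multiple-choice answer (A-D) per item.
rng = np.random.default_rng(0)
responses = pd.DataFrame(
    rng.choice(list("ABCD"), size=(200, 5)),
    columns=[f"item_{k}" for k in range(1, 6)],
)
key = {f"item_{k}": "A" for k in range(1, 6)}  # hypothetical answer key

# Score each student, then split students into lower and upper halves by total score.
scores = sum((responses[item] == ans).astype(int) for item, ans in key.items())
group = pd.Series(np.where(scores >= scores.median(), "upper half", "lower half"), name="score group")

# Distractor analysis: how often each answer choice is chosen within each score group.
for item in responses.columns:
    rates = pd.crosstab(group, responses[item], normalize="index").round(2)
    print(f"\n{item} (key = {key[item]})")
    print(rates)

A distractor that attracts a large share of higher-scoring students is the kind of pattern the abstract describes: a misconception whose prevalence rises with procedural fluency.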
Award ID(s):
1760491
PAR ID:
10112119
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 22nd Annual Conference on Research in Undergraduate Mathematics Education
Page Range / eLocation ID:
688-695
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In teaching mechanics, we use multiple representations of vectors to develop concepts and analysis techniques. These representations include pictorials, diagrams, symbols, numbers and narrative language. Through years of study as students, researchers, and teachers, we develop a fluency rooted in a deep conceptual understanding of what each representation communicates. Many novice learners, however, struggle to gain such understanding and rely on superficial mimicry of the problem-solving procedures we demonstrate in examples. The term representational competence refers to the ability to interpret, switch between, and use multiple representations of a concept as appropriate for learning, communication and analysis. In engineering statics, an understanding of what each vector representation communicates and how to use different representations in problem solving is important to the development of both conceptual and procedural knowledge. Science education literature identifies representational competence as a marker of true conceptual understanding. This paper presents development work for a new assessment instrument designed to measure representational competence with vectors in an engineering mechanics context. We developed the assessment over two successive terms in statics courses at a community college, a medium-sized regional university, and a large state university. We started with twelve multiple-choice questions that survey the vector representations commonly employed in statics. Each question requires the student to interpret and/or use two or more different representations of vectors and requires no calculation beyond single-digit integer arithmetic. Distractor answer choices include common student mistakes and misconceptions drawn from the literature and from our teaching experience. We piloted these twelve questions as a timed section of the first exam in fall 2018 statics courses at both Whatcom Community College (WCC) and Western Washington University. Analysis of students’ unprompted use of vector representations on the open-ended problem-solving section of the same exam provides evidence of the assessment’s validity as a measurement instrument for representational competence. We found a positive correlation between students’ accurate and effective use of representations and their score on the multiple-choice test. We gathered additional validity evidence by reviewing student responses on an exam wrapper reflection. We used item difficulty and item discrimination scores (point-biserial correlation) to eliminate two questions and revised the remaining questions to improve clarity and discriminatory power. We administered the revised version in two contexts: (1) again as part of the first exam in the winter 2019 statics course at WCC, and (2) as an extra credit opportunity for statics students at Utah State University. This paper includes sample questions from the assessment to illustrate the approach. The full assessment is available to interested instructors and researchers through an online tool.
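The item statistics named in this abstract (item difficulty and point-biserial discrimination) can be computed directly from a scored response matrix. The sketch below is a minimal illustration with made-up 0/1 data; the array shapes and variable names are assumptions for illustration, not details from the paper.

import numpy as np
from scipy.stats import pointbiserialr

# Hypothetical scored responses: rows = students, columns = items; 1 = correct, 0 = incorrect.
rng = np.random.default_rng(1)
scored = rng.integers(0, 2, size=(150, 12))

# Item difficulty: proportion of students answering each item correctly.
difficulty = scored.mean(axis=0)

# Corrected point-biserial discrimination: correlate each item with the total score on the
# remaining items, so the item does not inflate its own correlation.
total = scored.sum(axis=1)
discrimination = np.array([
    pointbiserialr(scored[:, j], total - scored[:, j])[0]  # [0] is the correlation coefficient
    for j in range(scored.shape[1])
])

for j, (p, r) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"item {j:2d}: difficulty = {p:.2f}, point-biserial = {r:.2f}")

Items with very low or negative point-biserial values are the usual candidates for removal or revision, which matches how the authors describe eliminating two questions.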
  2. Evans, T.; Marmur, O.; Hunter, J.; Leach, G. (Eds.)
    In college, required algebra courses can be a barrier to degree completion. One reason for this is that algebra courses in college tend to focus on procedures disconnected from meaning-making (e.g., Goldrick-Rab, 2007). It is critical to connect procedural fluency with conceptual understanding (Kilpatrick et al., 2001). Several instruments test algebraic proficiency; however, none was designed to test a large body of algebraic conceptions and concepts. We address this gap by developing the Algebra Concept Inventory (ACI) to test college students’ conceptual understanding in algebra. A total of 402 items were developed and tested in eight waves from spring 2019 to fall 2022, administered to 18,234 students enrolled in non-arithmetic-based mathematics classes at a large urban community college in the US. Data collection followed a common-item random groups equating design. Retrospective think-aloud interviews were conducted with 135 students to assess construct validity of the items. 2PL IRT models were run on all waves; 63.4% of items (253) have at least moderate discrimination, and roughly one-third have high or very high discrimination. In all waves, peak instrument values have excellent reliability (R ≥ 0.9). Convergent validity was explored through the relationship between scores on the ACI and mathematics course level. Students in “mid”-level courses scored on average 0.35 SD higher than those in “low”-level courses; students in “high”-level courses scored on average 0.35 SD higher than those in “mid”-level courses, providing strong evidence of convergent validity. There was no consistent evidence of differential item functioning (DIF) related to examinee characteristics: race/ethnicity, gender, and English-language-learner status. Results suggest that algebraic conceptual understanding, as conceptualized by the ACI, is measurable. The final ACI is likely to differentiate between students at various mathematical levels without conflating characteristics such as race, gender, etc.
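For reference, the two-parameter logistic (2PL) IRT model mentioned in this abstract gives the probability that examinee i answers item j correctly as a function of the examinee's latent ability and two item parameters. The notation below is the standard textbook form of the model, not notation taken from the paper:

P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + e^{-a_j(\theta_i - b_j)}}

where \theta_i is examinee i's latent ability, a_j is item j's discrimination (the statistic the abstract reports as at least moderate for 63.4% of items), and b_j is item j's difficulty.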
  3. Abstract

    As the use of computers in education increases, adaptive learning platforms are becoming more common. However, these adaptive systems are typically designed to support acquisition of declarative knowledge and/or procedural fluency but rarely address conceptual learning. In this work, we developed the Crystallography Adaptive Learning Module (CALM) for materials science to provide students with a tool for individualized conceptual learning. We used a randomized quasi-experimental design comparing two instructional designs with different levels of computer-provided direction and student agency. Undergraduate students were randomly assigned to one of two different instructional designs; one design had students complete an individualized, adaptive path using the CALM (N = 80), and the other gave students the freedom to explore CALM's learning resources but with limited guidance (N = 85). Within these two designs, we also compared students across different cumulative grade point average (GPA) groups. While there was no statistically significant difference in the measure of conceptual understanding between instructional designs or among the groups with the same GPA, there was evidence to suggest that the CALM improves conceptual understanding for students in the middle-GPA group. Students using the CALM also engaged more with the interactive learning videos than students in the other design. The number of videos watched in each instructional condition aligns with overall academic performance, as the low-GPA group received the most assigned supplements but watched the fewest videos by choice. This study provides insight for technology developers on how to build adaptive educational systems that provide an appropriate level of student agency to promote conceptual understanding in challenging STEM topics.

     
  4. Previous research on student thinking about experimental measurement and uncertainty has primarily focused on students’ procedural reasoning: Given some data, what should students calculate or do next? This approach, however, cannot tell us what beliefs or conceptual understanding leads to students’ procedural decisions. To explore this relationship, we first need to understand the range of students’ beliefs and conceptual understanding of measurement. In this work, we explored students’ philosophical beliefs about the existence of a true value in experimental measurement. We distributed a survey to students from 12 universities in which we presented two viewpoints on the existence of a true definite position resulting from an experiment, asking participants to indicate which view they agreed with more and to explain their choice. We found that participants, both students and experts, varied in their beliefs about the existence of a true definite position and discussed a range of concepts related to quantum mechanics and the experimental process to explain their answers, regardless of whether or not they agreed with the existence of a true value. From these results, we postulate that students who exhibit similar procedural reasoning may hold widely varying philosophical views about measurement. We recommend that future work investigate this potential relationship and whether and how instruction should attend to these philosophical views in addition to students’ procedural decisions.