Title: Reevaluating the relationship between explaining, tracing, and writing skills in CS1 in a replication study
Background and Context: Lopez and Lister first presented evidence for a skill hierarchy of code reading, tracing, and writing for introductory programming students. Further support for this hierarchy could help computer science educators sequence course content to best build student programming skill.
Objective: This study aims to replicate a slightly simplified hierarchy of skills in CS1 using a larger body of students (600+ vs. 38) in a non-major introductory Python course with computer-based exams. We also explore the validity of other possible hierarchies.
Method: We collected student score data on 4 kinds of exam questions. Structural equation modeling was used to derive the hierarchy for each exam.
Findings: We find multiple best-fitting structural models. The original hierarchy does not appear among the “best” candidates, but similar models do. We also determined that our methods provide us with correlations between skills and do not answer a more fundamental question: what is the ideal teaching order for these skills?
Implications: This modeling work is valuable for understanding the possible correlations between fundamental code-related skills. However, analyzing student performance on these skills at a moment in time is not sufficient to determine teaching order. We present possible study designs for exploring this more actionable research question.
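The core idea of the Method, comparing how well candidate skill orderings fit student score data, can be illustrated with a toy sketch. This is not the authors' structural equation modeling pipeline; the data-generating process, skill names, and fit measure below are all invented for illustration, with each candidate hierarchy scored by the total residual variance of a chain of simple regressions.

```python
# Toy illustration (not the study's SEM pipeline) of comparing candidate
# skill hierarchies. We simulate per-student scores on three skills under
# an assumed chain reading -> tracing -> writing, then score each candidate
# ordering by how well a chain of pairwise regressions explains the data.
import numpy as np

rng = np.random.default_rng(0)
n = 600  # roughly the study's cohort size

# Simulated scores: each skill builds on the previous one plus noise.
reading = rng.normal(0, 1, n)
tracing = 0.7 * reading + rng.normal(0, 0.7, n)
writing = 0.6 * tracing + rng.normal(0, 0.8, n)
skills = {"reading": reading, "tracing": tracing, "writing": writing}

def chain_fit(order):
    """Total residual variance when each skill is regressed on its
    predecessor in the candidate chain (lower = better fit)."""
    total = 0.0
    for parent, child in zip(order, order[1:]):
        x, y = skills[parent], skills[child]
        beta = np.dot(x, y) / np.dot(x, x)   # least-squares slope
        total += np.var(y - beta * x)        # unexplained variance
    return total

candidates = [("reading", "tracing", "writing"),
              ("writing", "tracing", "reading"),
              ("tracing", "reading", "writing")]
for order in candidates:
    print(" -> ".join(order), round(chain_fit(order), 3))
```

Notably, the fully reversed chain fits nearly as well as the true generating order here, since a chain of correlations is statistically equivalent in both directions. This mirrors the abstract's point: correlational fit at one moment in time cannot, by itself, determine an ideal teaching order.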
Award ID(s):
2121424
PAR ID:
10340072
Editor(s):
Dorn, Brian; Vahrenhold, Jan
Journal Name:
Computer Science Education
ISSN:
0899-3408
Page Range / eLocation ID:
1 to 29
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. NA (Ed.)
    We conducted two studies to investigate the extent to which brief, spaced, mastery practice on skills relevant to introductory physics affects student performance. The first study investigated the effect of practicing “specific” physics skills, each relevant to only one or a few items on the course exam. This study employed a quasi-experimental design with 766 students assigned to “intervention” or “control” conditions by lecture section, with sections sharing common exams. Results of the first study indicate significant improvement in performance for only some of the exam items relevant to the specific skills practiced. We also observed between-section performance differences on other exam items not relevant to training, which may be due to specific prior quiz items from individual instructors. The second study investigated the effect of practice on the “general” skill of algebra relevant to introductory physics, a skill relevant to most of the exam items. This study employed a similar quasi-experimental design with 363 students assigned to treatment or control conditions, and we also administered a reliable pre- and post-test assessment of algebra skills that was iteratively developed for this project. Results from the second study indicate that 75% of students had high accuracy on the algebra pretest. Students in the control condition who scored low on the pretest gained about 0.7 standard deviations on the post-test, presumably from engagement with the course alone, and students in the algebra practice condition had statistically similar gains, indicating no observed effect of algebra practice on algebra pre- to post-test gains.
In contrast, we find some potential evidence that the algebra practice improved final exam performance for students with high pretest scores and did not benefit students with low pretest scores, although this result is inconclusive: the point estimate of the effect size was 0.24 for high-pretest students, but the 95% confidence interval [−0.01, 0.48] slightly overlapped with zero. Further, we find a statistically significant positive effect of algebra practice on exam items with higher algebraic complexity and no effect for items with low complexity. One possible explanation for the added benefit of algebra practice for high-scoring students is that their fluency in algebra skills may have improved. Overall, our observations provide some evidence that spaced, mastery practice is beneficial for exam performance on both specific and general skills, and that students who are better prepared in algebra may especially benefit from mastery practice in relevant algebra skills, in terms of improved final exam performance.
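The effect size and confidence interval reasoning above (a point estimate of 0.24 whose 95% interval brushes zero) follows a standard recipe. The sketch below shows one common way such numbers are computed, Cohen's d with a large-sample interval; the score arrays are made up, and this is not claimed to be the study's exact procedure.

```python
# Hedged sketch: computing Cohen's d and a 95% CI for a treatment/control
# comparison, the kind of statistic reported in the abstract above.
# The score data here are invented for illustration.
import math
import statistics

def cohens_d_ci(group1, group2, z=1.96):
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.fmean(group1), statistics.fmean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    # Pooled standard deviation across both groups.
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Large-sample standard error of d (Hedges & Olkin approximation).
    se = math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

treatment = [78, 85, 90, 74, 88, 81, 79, 92]
control = [72, 80, 85, 70, 83, 77, 75, 86]
d, (lo, hi) = cohens_d_ci(treatment, control)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

An interval that crosses zero, as in this small-sample example, means the direction of the effect is not statistically settled at the 95% level, which is exactly why the abstract calls its 0.24 result inconclusive.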
  2. Engineers must understand how to build, apply, and adapt various types of models in order to be successful. Throughout undergraduate engineering education, modeling is fundamental for many core concepts, though it is rarely explicitly taught. There are many benefits to explicitly teaching modeling, particularly in the first years of an engineering program. The research questions that drove this study are: (1) How do students’ solutions to a complex, open-ended problem (both written and coded solutions) develop over the course of multiple submissions? and (2) How do these developments compare across groups of students that did and did not participate in a course centered around modeling? Students’ solutions to an open-ended problem were explored across multiple sections of an introductory programming course. These sections were divided into two groups: (1) the experimental group, whose sections discussed and utilized mathematical and computational models explicitly throughout the course, and (2) the comparison group, whose sections focused on developing algorithms and writing code with a more traditional approach. All sections required students to complete a common open-ended problem with two versions (the first with a smaller data set, the second with a larger one). Each version had two submissions: (1) a mathematical model or algorithm (i.e., the students’ written solution, potentially with tables and figures) and (2) a computational model or program (i.e., the students’ MATLAB code). The students’ solutions were graded by student graders who had completed two required training sessions in which they assessed multiple sample student solutions using the rubrics, to ensure consistency across grading. The resulting rubric-based assessments of students’ work were analyzed to identify patterns in students’ submissions and comparisons across sections.
The results identified differences in mathematical and computational model development between students in the experimental and comparison groups. Students in the experimental group were better able to address the complexity of the problem. Most groups demonstrated similar levels and types of change across submissions for the other dimensions: the purpose of model components, addressing users’ anticipated needs, and communicating their solutions. These findings help inform other researchers and instructors how to help students develop mathematical and computational modeling skills, especially in a programming course. This work is part of a larger NSF study on the impact of varying levels of modeling interventions on students’ awareness of different types of models and their applications, as well as their ability to apply and develop different types of models.
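The grading procedure above trains student graders "to ensure consistency across grading." One standard way to quantify that consistency, though the abstract does not say this study used it, is an inter-rater agreement statistic such as Cohen's kappa between two graders' rubric scores. The rubric levels and scores below are hypothetical.

```python
# Hedged sketch: Cohen's kappa as a measure of grading consistency
# between two trained graders. Not necessarily the study's method;
# the rubric scores here are invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items scored identically.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical rubric levels (0-3) assigned by two trained graders.
grader1 = [3, 2, 2, 1, 3, 0, 2, 3, 1, 2]
grader2 = [3, 2, 1, 1, 3, 0, 2, 2, 1, 2]
print(round(cohens_kappa(grader1, grader2), 2))  # -> 0.72
```

Kappa corrects raw percent agreement for agreement expected by chance, so it is a stricter check than simply counting matching scores.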
  4. Preparing for high-stakes exams in introductory physics courses is generally a self-regulated activity. Compared to other exam-reviewing strategies, doing practice exams has been shown to help students recognize gaps in their knowledge, encourage active practicing, and produce long-term retention. However, many students, particularly students who are struggling with the course material, are not guided by research-based study strategies and do not use practice exams effectively. Using data collected from a fully online course in Spring 2021, this study examines two interventions aimed at improving students’ self-regulated studying behaviors and enhancing student metacognition during exam preparation. We found that a modified format of online practice exams, with one attempt per question and delayed feedback, increases the accuracy of feedback about student readiness for exams but does not change the accuracy of their predicted exam scores or their studying behaviors. Additionally, an added mock exam one week before the actual exam affects students’ intentions for studying but does not affect actual study behaviors or facilitate metacognition. These results suggest that interventions designed to improve exam preparation likely need to include explicit instruction on study strategies and student beliefs about learning.
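The study above measures "the accuracy of their predicted exam scores," a metacognitive calibration question. A minimal sketch of how such calibration can be quantified, using invented scores and not necessarily the study's own metric, is the mean signed error (bias, where positive means overconfidence) and mean absolute error of students' predictions.

```python
# Hedged sketch: quantifying how well students predict their own exam
# scores. Predicted/actual values are invented; this is one simple
# calibration metric, not necessarily the one the study used.
predicted = [85, 90, 70, 95, 60, 80, 75, 88]
actual    = [78, 85, 72, 90, 55, 82, 70, 80]

errors = [p - a for p, a in zip(predicted, actual)]
bias = sum(errors) / len(errors)                 # mean signed error
mae = sum(abs(e) for e in errors) / len(errors)  # mean absolute error
print(f"bias = {bias:+.1f} points, MAE = {mae:.1f} points")
```

A positive bias with a nontrivial MAE, as in this toy data, is the pattern the intervention aimed to shrink: students tend to overestimate their readiness.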