Title: Characterizing Student Development Progress: Validating Student Adherence to Project Milestones
As enrollment in CS programs has risen, it has become increasingly difficult for teaching staff to provide timely and detailed guidance on student projects. To address this, instructors use automated assessment tools to evaluate students' code and processes as they work. Even with automation, understanding students' progress, and more importantly, whether students are making the 'right' progress toward the solution, is challenging at scale. To help students manage their time and learn good software engineering processes, instructors may create intermediate deadlines, or milestones, to support progress. However, students' adherence to these processes is opaque, which may hinder student success and instructional support. A better understanding of how students follow process guidance in practice is needed to identify the right assignment structures for developing high-quality process skills. We use data collected from an automated assessment tool to calculate a set of 15 progress indicators and investigate which types of progress are made during four stages of two projects in a CS2 course. These stages are delimited by milestones that guide student activities. We show how examining which progress indicators are triggered significantly more or less during each stage validates whether students are adhering to the goals of each milestone. We also find that students trigger some progress indicators earlier on the second project, suggesting that their processes improve over time.
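As a rough illustration of the stage-level analysis the abstract describes, the sketch below compares how often one progress indicator fires during a given milestone stage versus the remaining stages. The indicator name, data layout, and choice of a Mann-Whitney U test are illustrative assumptions; the paper's own 15 indicators and statistical methodology are not specified here and may differ.

```python
# Hypothetical sketch: testing whether a progress indicator fires
# significantly more or less during one milestone stage than the others.
# Indicator names, data layout, and the choice of test are assumptions.
from scipy.stats import mannwhitneyu

def stage_vs_rest(events, indicator, stage):
    """Compare per-student trigger counts of one indicator in `stage`
    against counts in all other stages (two-sided Mann-Whitney U)."""
    in_stage = [e["count"] for e in events
                if e["indicator"] == indicator and e["stage"] == stage]
    rest = [e["count"] for e in events
            if e["indicator"] == indicator and e["stage"] != stage]
    stat, p = mannwhitneyu(in_stage, rest, alternative="two-sided")
    return stat, p

# events: one record per (student, stage, indicator) with a trigger count.
events = [
    {"student": "s1", "stage": "M1", "indicator": "new_tests_added", "count": 4},
    {"student": "s1", "stage": "M2", "indicator": "new_tests_added", "count": 1},
    {"student": "s2", "stage": "M1", "indicator": "new_tests_added", "count": 3},
    {"student": "s2", "stage": "M2", "indicator": "new_tests_added", "count": 0},
]
stat, p = stage_vs_rest(events, "new_tests_added", "M1")
print(f"U={stat}, p={p:.3f}")  # a small p suggests stage-specific behavior
```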
Award ID(s):
1934975
PAR ID:
10387187
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Proceedings of the 53rd ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2022)
Page Range / eLocation ID:
15-21
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This innovative-practice work-in-progress paper presents a systematic approach for screening and aligning service-learning projects to maximize student learning outcomes. We introduce a feasibility assessment model with criteria evaluated through a standardized rubric that guides instructors to critically assess project fit and proactively identify risks to student outcomes. The rubric serves a dual purpose: guiding the assessment process and prompting discussions with potential project partners. These discussions elicit crucial details about the project scope, potential challenges, and other critical factors. This not only facilitates effective project selection but also allows for necessary adjustments to project parameters, significantly improving the chances of successful student completion. This work builds on the experience accumulated by CCSU's Software Engineering Studio, which connects community project partners with teams of 4–5 seniors working on software development projects spanning one or several semesters. Since 2014, the Software Engineering Studio has facilitated over 65 distinct projects and engaged over 550 students. By capturing the lessons learned across a wide range of successful service-learning projects, we show the value of using a feasibility assessment model to evaluate potential projects based on criteria including alignment with course goals, student skill sets, workload manageability, educational engagement, and other considerations. The application of this model is illustrated with a case study, which demonstrates how the model helps instructors align projects with academic goals while considering scope, risks, and other critical elements. This example demonstrates how the model facilitates communication with project partners, identifies potential risks, and guides project adjustments to ensure a successful learning experience for students. The approach is transferable to other disciplines with adaptations for project types and student skills. This work contributes to the field of service learning by offering a practical framework for integrating valuable real-world projects into the curriculum while prioritizing student learning outcomes.
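A minimal sketch of how such a feasibility rubric might be operationalized as a weighted score follows. The criterion names, weights, 1-5 scale, and acceptance threshold are illustrative assumptions, not the rubric from the paper.

```python
# Hypothetical feasibility-rubric score for screening service-learning
# projects. Criteria, weights, and threshold are illustrative assumptions.
CRITERIA = {
    "course_goal_alignment":  0.30,
    "student_skill_fit":      0.25,
    "workload_manageability": 0.20,
    "educational_engagement": 0.15,
    "partner_availability":   0.10,
}

def feasibility_score(ratings):
    """ratings: criterion -> score on a 1-5 scale; returns the weighted mean."""
    missing = set(CRITERIA) - set(ratings)
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

ratings = {"course_goal_alignment": 5, "student_skill_fit": 4,
           "workload_manageability": 3, "educational_engagement": 4,
           "partner_availability": 2}
score = feasibility_score(ratings)
print(f"feasibility: {score:.2f} / 5")
# A cutoff (e.g., >= 3.5) could flag projects worth pursuing and prompt
# discussion with the partner about low-scoring criteria.
```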
  2. Open-ended questions in mathematics are commonly used by teachers to monitor and assess students’ deeper conceptual understanding of content. Student answers to these types of questions often exhibit a combination of language, drawn diagrams and tables, and mathematical formulas and expressions that supply teachers with insight into the processes and strategies students adopt in formulating their responses. While these responses help inform teachers about their students’ progress and understanding, the amount of variation in them can make it difficult and time-consuming for teachers to manually read, assess, and provide feedback on student work. For this reason, there has been a growing body of research on developing AI-powered tools to support teachers in this task. This work builds upon that prior research by introducing a model designed to help automate the assessment of student responses to open-ended questions in mathematics through sentence-level semantic representations. We find that this model outperforms previously published benchmarks across three different metrics. With this model, we conduct an error analysis to examine characteristics of student responses that could be used to further improve the method.
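A minimal sketch of one way sentence-level semantic representations could drive automated scoring follows, using a pretrained sentence encoder and a nearest-neighbor regressor over teacher-graded responses. The encoder choice, toy data, and nearest-neighbor strategy are assumptions; the paper's actual model may differ.

```python
# Hypothetical sketch: scoring open-ended math responses via sentence-level
# embeddings. Model name and scoring strategy are assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import KNeighborsRegressor

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

# Teacher-scored responses to one question (toy data).
scored = [("add the two fractions after finding a common denominator", 4),
          ("I multiplied the tops and bottoms", 1),
          ("convert to twelfths, then add the numerators", 4)]
texts, grades = zip(*scored)

# Predict a new response's grade from its nearest graded neighbors.
knn = KNeighborsRegressor(n_neighbors=2, metric="cosine")
knn.fit(model.encode(list(texts)), list(grades))

new = ["rewrite both fractions over 12 and sum them"]
print(knn.predict(model.encode(new)))  # predicted grade for the new answer
```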