Title: Characterizing Student Development Progress: Validating Student Adherence to Project Milestones
As enrollment in CS programs has risen, it has become increasingly difficult for teaching staff to provide timely and detailed guidance on student projects. To address this, instructors use automated assessment tools to evaluate students' code and processes as they work. Even with automation, understanding students' progress, and more importantly, whether students are making the 'right' progress toward the solution, is challenging at scale. To help students manage their time and learn good software engineering processes, instructors may create intermediate deadlines, or milestones, to support progress. However, students' adherence to these processes is opaque, which may hinder student success and instructional support. A better understanding of how students follow process guidance in practice is needed to identify the right assignment structures for developing high-quality process skills. We use data collected from an automated assessment tool to calculate a set of 15 progress indicators and investigate which types of progress are made during four stages of two projects in a CS2 course. These stages are delimited by milestones that guide student activities. We show how examining which progress indicators are triggered significantly more or less during each stage validates whether students are adhering to the goals of each milestone. We also find that students trigger some progress indicators earlier on the second project, suggesting that their processes improve over time.
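As a rough illustration (not the authors' code), the analysis the abstract describes could look like the sketch below: count how often each progress indicator fires in each milestone stage, then flag indicators that fire disproportionately often or rarely in a stage. The chi-square test, the |residual| > 2 threshold, and all function names are illustrative assumptions; the paper does not specify these details.

```python
# Hedged sketch, not the authors' implementation: given (indicator, stage)
# trigger events mined from automated-assessment logs, build a contingency
# table and flag indicators that fire significantly more or less than
# expected in each stage.
import math
from scipy.stats import chi2_contingency

def stage_trigger_table(events, indicators, stages):
    """Count how often each progress indicator fires in each milestone stage."""
    counts = {(i, s): 0 for i in indicators for s in stages}
    for indicator, stage in events:
        counts[(indicator, stage)] += 1
    return [[counts[(i, s)] for s in stages] for i in indicators]

def flag_disproportionate(table, indicators, stages, alpha=0.05):
    """Flag (indicator, stage) cells that deviate from independence."""
    chi2, p, dof, expected = chi2_contingency(table)
    flagged = []
    if p < alpha:
        for r, indicator in enumerate(indicators):
            for c, stage in enumerate(stages):
                # Standardized residual: how far the observed count sits
                # from what independence of indicator and stage predicts.
                resid = (table[r][c] - expected[r][c]) / math.sqrt(expected[r][c])
                if abs(resid) > 2:
                    flagged.append((indicator, stage,
                                    "more" if resid > 0 else "less"))
    return flagged
```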
Award ID(s):
1821475
NSF-PAR ID:
10392593
Author(s) / Creator(s):
; ;
Editor(s):
Merkle, Larry; Doyle, Maureen; Sheard, Judithe; Soh, Leen-Kiat; Dorn, Brian
Date Published:
Journal Name:
Proceedings of the 53rd ACM Technical Symposium on Computer Science Education
Page Range / eLocation ID:
15-21
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This research study investigates the range of conceptions of prototyping in engineering design courses by exploring conceptions and implementations from the instructors' perspective. Prototyping is an activity central to engineering design. In an undergraduate engineering curriculum, prototyping to support engineering education and practice has a range of implementations, from first-year engineering to capstone engineering design experiences. Understanding faculty conceptions of the reason, purpose, and place of prototyping can help illustrate how teaching and learning of the engineering design process is realistically implemented across a curriculum and how students are prepared for work practice. We seek to understand, and consequently improve, engineering design teaching and learning through transformations of practice that are based on engineering education research. In this exploratory study, we interviewed three faculty members who teach engineering design in project-based learning courses across the curriculum of an undergraduate engineering program. This builds on the authors' related work, which investigated undergraduate engineering students' conceptions of prototyping activities and processes. With our instructor participants, a similar interview protocol was followed through semi-structured qualitative interviews. Data analysis was undertaken through an emerging thematic analysis of the interview transcripts. Early findings characterize the focus on teaching the design process; the kind of feedback that the educators provide on students' prototypes; students' behavior while working on design projects; and educators' perspectives on the design course. Comparing faculty conceptions with students' conceptions of prototyping can shed light on the efficacy of using prototyping as an authentic experience in design teaching and learning. In project-based learning courses, particular issues of authenticity and assessment are under consideration, especially across the curriculum. More specifically, "proportions of problems" inform "problem solving" as one of the key characteristics in design thinking, teaching, and learning. More attention to prototyping as part of the study of problem-solving processes can be useful for understanding the impact of instructional design. Challenges for teaching engineering design exist and may stem from difficulties in framing design problems, recognizing what expertise students possess, and assessing that expertise to help students reach their goals at an appropriate pace, as well as from ambiguity in student learning goals. Initial findings show that prototyping activities can help students become more reflective on their designs. Scaffolded prototyping activities can support self-regulated learning by students. The range of support and facilities, such as campus makerspaces, may also help students and instructors alike develop industry-ready engineering students.
  2. Abstract
    Background

    Teachers often rely on the use of open‐ended questions to assess students' conceptual understanding of assigned content. Particularly in the context of mathematics, teachers use these types of questions to gain insight into the processes and strategies adopted by students in solving mathematical problems, beyond what is possible through more closed‐ended problem types. While these types of problems are valuable to teachers, the variation in student responses makes them difficult, and time‐consuming, to evaluate and to provide directed feedback on. It is well established that feedback, both in terms of a numeric score and, more importantly, in the form of teacher‐authored comments, can help guide students as to how to improve, leading to increased learning. It is for this reason that teachers need better support not only for assessing students' work but also for providing meaningful and directed feedback to students.

    Objectives

    In this paper, we seek to develop, evaluate, and examine machine learning models that support automated open response assessment and feedback.

    Methods

    We build upon prior research in the automatic assessment of student responses to open‐ended problems and introduce a novel approach that leverages student log data combined with machine learning and natural language processing methods. Using sentence‐level semantic representations of student responses to open‐ended questions, we propose a collaborative filtering‐based approach to both predict student scores and recommend appropriate feedback messages for teachers to send to their students. A minimal sketch of this style of similarity-based scoring appears below.
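The following is a hedged sketch of the kind of collaborative-filtering-style scoring and feedback retrieval this Methods paragraph describes, not the paper's actual system: all names, the cosine-similarity neighborhood, and the choice of k are assumptions, and the sentence embeddings are presumed to come from some external sentence encoder.

```python
# Illustrative sketch: predict a score for a new open response and suggest
# feedback by looking at the k most semantically similar graded responses.
import numpy as np

def predict_score_and_feedback(new_emb, past_embs, past_scores, past_feedback, k=5):
    """new_emb:       (d,) embedding of the new student response
    past_embs:     (n, d) embeddings of previously graded responses
    past_scores:   (n,) NumPy array of teacher-assigned scores
    past_feedback: list of n teacher-authored feedback messages
    """
    # Cosine similarity between the new response and each graded response.
    sims = past_embs @ new_emb / (
        np.linalg.norm(past_embs, axis=1) * np.linalg.norm(new_emb) + 1e-12)
    top = np.argsort(sims)[-k:]            # indices of the k nearest neighbors
    weights = np.maximum(sims[top], 0.0)
    weights = weights / (weights.sum() + 1e-12)
    score = float(weights @ past_scores[top])   # similarity-weighted score
    feedback = past_feedback[int(top[-1])]      # reuse the closest teacher message
    return score, feedback
```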

    Results and Conclusion

    We find that our method outperforms previously published benchmarks across three different metrics for the task of predicting student performance. Through an error analysis, we identify several areas where future work may be able to improve upon our approach.

     
  3. Nigel Bosch; Antonija Mitrovic; Agathe Merceron (Eds.)
    Demand for education in Computer Science has increased markedly in recent years. With increased demand has come an increased need for student support, especially for courses with large programming projects. Instructors commonly provide online discussion forums or office hours to address this massive demand for help. Identifying what types of questions students ask in those interactions, and what triggers their help requests, can in turn assist instructors in better managing limited help-providing resources. In this study, we explore students' help-seeking actions through these two channels and investigate their coding actions before help requests to better understand what motivates students to seek help on programming projects. We collected students' help request data and commit logs from two Fall offerings of a CS2 course. Because different types of questions should relate to different behavioral patterns, we first categorized students' help requests based on their content (e.g., Implementation, General Debugging, or Addressing Teaching Staff (TS) Test Failures). We found that General Debugging is the most frequently asked question type. We then analyzed how the popularity of each type of request changed over time. Our results suggest that Implementation questions are more common in the early stage of the project cycle, shifting to General Debugging and Addressing TS Test Failures in later stages. We also examined students' commit frequency in the hour before their help requests; the results show that commit frequency is significantly lower before Implementation requests and significantly higher before TS Test Failure requests. Moreover, we checked whether students changed their source code or test code before each help request: Implementation requests were associated with a higher chance of source code changes, and coverage questions were associated with more test code changes. We also used a Markov chain model to characterize students' action sequences before, during, and after the requests (a sketch of such a model follows). Finally, we explored students' progress after office-hours interactions and found that over half of the students improved the correctness of their code within 20 minutes of the end of an office-hours interaction addressing TS Test Failures.
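For illustration only, a first-order Markov chain over coding and help-seeking actions could be estimated as below. The action labels and the uniform fallback for unobserved rows are assumptions, not details taken from the paper.

```python
# Hedged sketch: estimate P(next_action | current_action) from observed
# per-student action sequences mined from commit logs and help requests.
from collections import Counter

def transition_matrix(sequences, states):
    """Row-normalized first-order transition probabilities over `states`."""
    pairs = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):   # count consecutive action pairs
            pairs[(a, b)] += 1
    matrix = {}
    for a in states:
        total = sum(pairs[(a, b)] for b in states)
        # Rows with no observations fall back to a uniform distribution.
        matrix[a] = {b: (pairs[(a, b)] / total if total else 1 / len(states))
                     for b in states}
    return matrix

# Usage with hypothetical action labels:
demo = [['edit_source', 'commit', 'help_request', 'edit_test', 'commit']]
print(transition_matrix(demo, ['edit_source', 'edit_test', 'commit', 'help_request']))
```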