Computing theory is often perceived as challenging by students, and verifying the correctness of a student’s automaton or grammar is time-consuming for instructors. Aiming to benefit both students and instructors, we designed an automated feedback tool for assignments in which students construct automata or grammars. Our tool, built as an extension to the widely used JFLAP software, determines whether a submission is correct and, for incorrect submissions, provides a “witness” string demonstrating the incorrectness. We studied the usage and benefits of our tool in two terms, Fall 2019 and Spring 2021. Each term, students in one section of the Introduction to Computer Science Theory course were required to use our tool for sample homework questions targeting DFAs, NFAs, regular expressions, CFGs, and PDAs. In Fall 2019, this was a regular section of the course. We also collected comparison data from another section that did not use our tool but had the same instructor and homework assignments. In Spring 2021, a smaller honors section provided the perspective of that demographic. Overall, students who used the tool reported that it helped them not only solve the homework questions (and they performed better than the comparison group) but also better understand the underlying theory concepts. They were engaged with the tool: almost all persisted with their attempts until their submission was correct, despite not being able to random-walk to a solution. This indicates that witness feedback, a succinct explanation of incorrectness, is effective. The tool also assisted instructors with assignment grading.
Witness Feedback for Introductory CS Theory Assignments
Computing theory analyzes abstract computational models to rigorously study the computational difficulty of various problems. Introductory computing theory can be challenging for undergraduate students, and the overarching goal of our research is to help students learn these computational models. The most common pedagogical tool for interacting with these models is the Java Formal Languages and Automata Package (JFLAP). We developed a JFLAP server extension that accepts homework submissions from students, evaluates each submission as correct or incorrect, and provides a witness string when the submission is incorrect. Our extension currently provides witness feedback for deterministic finite automata, nondeterministic finite automata, regular expressions, context-free grammars, and pushdown automata. In Fall 2019, we ran a preliminary investigation on two synchronized sections (Control and Study) of the required undergraduate course Introduction to Computer Science Theory. The Study section (n = 29) used our extension for five targeted homework questions, and the Control section (n = 35) submitted these problems using traditional means. The Study section strongly outperformed the Control section with respect to the percentage of perfect grades on the targeted homework questions. Our most interesting result concerned student persistence: with only the short witness string as feedback, students voluntarily persisted in submitting attempts until correct.
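For the regular-language models, a witness of this kind can be computed with the classical product construction: run the reference machine and the student's machine in lockstep and search, breadth-first, for a reachable state pair where exactly one machine accepts. The sketch below is a minimal, hypothetical illustration of that idea in Java (JFLAP's implementation language); the DFA encoding and the name findWitness are our own assumptions, not the extension's actual API.

```java
import java.util.*;

/**
 * Hypothetical sketch: given a reference DFA and a student DFA over the same
 * alphabet, find a shortest "witness" string that exactly one machine accepts,
 * via BFS over the product automaton. Returns null when the DFAs are equivalent.
 */
class WitnessFinder {

    /** A complete DFA: delta[state][symbolIndex] -> next state. */
    static class DFA {
        int start;
        boolean[] accepting;   // accepting[q] == true iff q is a final state
        int[][] delta;         // delta[q][a] = next state on symbol index a
    }

    static String findWitness(DFA ref, DFA sub, char[] alphabet) {
        // Each BFS node is a pair (state in ref, state in sub).
        boolean[][] visited = new boolean[ref.delta.length][sub.delta.length];
        Deque<int[]> queue = new ArrayDeque<>();
        Map<Long, Long> parent = new HashMap<>();        // for path reconstruction
        Map<Long, Character> viaSymbol = new HashMap<>(); // symbol taken to reach a pair

        queue.add(new int[]{ref.start, sub.start});
        visited[ref.start][sub.start] = true;

        while (!queue.isEmpty()) {
            int[] pair = queue.poll();
            int p = pair[0], q = pair[1];
            if (ref.accepting[p] != sub.accepting[q]) {
                // Exactly one machine accepts here: walk parents back to the
                // start pair to reconstruct the witness (empty string possible).
                StringBuilder w = new StringBuilder();
                long key = encode(p, q, sub.delta.length);
                while (parent.containsKey(key)) {
                    w.append(viaSymbol.get(key));
                    key = parent.get(key);
                }
                return w.reverse().toString();
            }
            for (int a = 0; a < alphabet.length; a++) {
                int np = ref.delta[p][a], nq = sub.delta[q][a];
                if (!visited[np][nq]) {
                    visited[np][nq] = true;
                    long child = encode(np, nq, sub.delta.length);
                    parent.put(child, encode(p, q, sub.delta.length));
                    viaSymbol.put(child, alphabet[a]);
                    queue.add(new int[]{np, nq});
                }
            }
        }
        return null; // no distinguishing string: the submission is correct
    }

    private static long encode(int p, int q, int width) {
        return (long) p * width + q;
    }
}
```

Because the search is breadth-first, the returned witness is a shortest distinguishing string, which keeps the feedback succinct; NFAs, regular expressions, and the other models would first have to be converted or handled by other decision procedures before a comparison like this applies.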
- Award ID(s): 1819546
- PAR ID: 10296082
- Date Published:
- Journal Name: 52nd ACM Technical Symposium on Computer Science Education (SIGCSE)
- Page Range / eLocation ID: 1300 to 1300
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Computer science class enrollments have risen rapidly in the past decade. With current class sizes, standard approaches to grading and providing personalized feedback are no longer possible, and new techniques become both feasible and necessary. In this paper, we present the third version of Automata Tutor, a tool for helping teachers and students in large courses on automata and formal languages. The second version of Automata Tutor supported automatic grading and feedback for finite-automata constructions and has already been used by thousands of users in dozens of countries. This new version of Automata Tutor supports automated grading and feedback generation for a greatly extended variety of new problems, including problems that ask students to create regular expressions, context-free grammars, pushdown automata, and Turing machines corresponding to a given description, as well as problems about converting between equivalent models, e.g., from regular expressions to nondeterministic finite automata (a sketch of this classical conversion appears after this list). Moreover, for several problems, this new version also enables teachers and students to automatically generate new problem instances. We also present the results of a survey run on a class of 950 students, which shows very positive results about the usability and usefulness of the tool.
- Step-based tutoring systems are known to be more effective than traditional answer-based systems. However, they require that each step in a student’s work be accepted and evaluated automatically in order to provide effective feedback. In the domain of linear circuit analysis, it is frequently necessary to allow students to draw or edit circuits on their screen to simplify or otherwise transform them. Here, the interface developed to accept such input and provide immediate feedback in the Circuit Tutor system is described, along with systematic assessment data. Advanced simplification methods, such as removing circuit sections that are removably hinged, voltage-splittable, or current-splittable, are taught to students in an interactive tutorial and then supported in the circuit editor itself. To address the learning curve associated with such an interface, ~70 video tutorials were created to demonstrate exactly how to work the randomly generated problems at each level of each of the tutorials in the system. A complete written record, or “transcript,” of students’ work in the system is being made available, showing both incorrect and correct steps. Introductory interactive (multiple-choice) tutorials are now included on most topics. Assessment of exercises using the interactive editor was carried out by professional evaluators for several institutions, including three that heavily serve underrepresented minorities. Both quantitative and qualitative methods were used, including focus groups, surveys, and interviews. Controlled, randomized, blind evaluations were carried out in three different course sections in Spring and Fall 2019 to evaluate three tutorials using the interactive editor, comparing Circuit Tutor to both a commercial answer-based system and conventional textbook-based paper homework. In Fall 2019, students rated the software a mean of 4.14/5 for being helpful for learning the material vs. 3.05/5 for paper homework (HW), p < 0.001 and effect size d = 1.11σ. On relevant exam questions that semester, students scored significantly higher (p = 0.014, effect size d = 0.64σ) when using Circuit Tutor compared to paper HW in one class section, with no significant difference in the other section.
- In online or large in-person course sections, instructors often adopt an online homework tool to alleviate the burden of grading. While these systems can quickly tell students whether they got a multiple-choice or numeric answer correct, they are unable to provide feedback on students’ free body diagrams. Because sketching a free body diagram correctly is a foundational skill for solving engineering problems, the loss of feedback in this area is a detriment to students. To address the need for rapid feedback on students’ free body diagram sketching, the research team developed an online sketch-recognition system called Mechanix. This system allows students to sketch free body diagrams, including for trusses, and receive instant feedback on their sketches. The sketching feedback is ungraded. After the students have a correct sketch, they can then enter the numeric answers for the problem and submit those for a grade. The platform thereby offers the grading convenience of other online homework systems while also helping students develop their free body diagram sketching skills. To assess the efficacy of this experimental system, standard concept inventories were administered pre- and post-semester for both experimental and control groups. The unfamiliarity or difficulty of some advanced problems in the Statics Concept Inventory, however, appeared to discourage students, and many would stop putting in any effort after a few especially challenging problems. This effect was especially pronounced among the Construction majors versus the Mechanical Engineering majors in the test group. To address this tendency, and therefore collect more complete pre- and post-semester concept inventory data, the research group reordered the Statics Concept Inventory questions from more familiar to more challenging, based on the past performance of the initial students taking the survey. This paper describes the process and results of the effort to reorder this instrument to increase Construction student participation and, therefore, the researchers’ ability to measure the impact of the Mechanix system.
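As a concrete illustration of the regular-expression-to-NFA conversion problems mentioned in the Automata Tutor abstract above, here is a minimal, hypothetical sketch of Thompson's construction, the textbook algorithm for that conversion. The Frag representation and helper names are our own assumptions; this is not code from Automata Tutor or JFLAP.

```java
import java.util.*;

/**
 * Hypothetical sketch of Thompson's construction: each regex operator is
 * compiled to an NFA fragment with one start and one accept state, glued
 * together with epsilon transitions (encoded here as symbol -1).
 */
class Thompson {
    static final List<int[]> edges = new ArrayList<>(); // {from, symbol, to}
    static int stateCount = 0;

    static int fresh() { return stateCount++; }
    static void edge(int from, int sym, int to) { edges.add(new int[]{from, sym, to}); }

    /** An NFA fragment with a single start and a single accept state. */
    record Frag(int start, int accept) {}

    static Frag symbol(char c) {
        int s = fresh(), t = fresh();
        edge(s, c, t);                       // one transition on the literal symbol
        return new Frag(s, t);
    }

    static Frag concat(Frag a, Frag b) {
        edge(a.accept(), -1, b.start());     // glue a's accept to b's start
        return new Frag(a.start(), b.accept());
    }

    static Frag union(Frag a, Frag b) {
        int s = fresh(), t = fresh();
        edge(s, -1, a.start()); edge(s, -1, b.start());   // branch into either side
        edge(a.accept(), -1, t); edge(b.accept(), -1, t); // merge back together
        return new Frag(s, t);
    }

    static Frag star(Frag a) {
        int s = fresh(), t = fresh();
        edge(s, -1, a.start()); edge(s, -1, t);  // enter the loop, or skip it
        edge(a.accept(), -1, a.start());         // repeat
        edge(a.accept(), -1, t);                 // exit
        return new Frag(s, t);
    }

    public static void main(String[] args) {
        // (a|b)*a : strings over {a, b} ending in 'a'
        Frag nfa = concat(star(union(symbol('a'), symbol('b'))), symbol('a'));
        System.out.println("start=" + nfa.start() + ", accept=" + nfa.accept()
                + ", transitions=" + edges.size());
    }
}
```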