Computing theory is often perceived as challenging by students, and verifying the correctness of a student's automaton or grammar is time-consuming for instructors. To benefit both students and instructors, we designed an automated feedback tool for assignments in which students construct automata or grammars. Our tool, built as an extension to the widely used JFLAP software, determines whether a submission is correct and, for incorrect submissions, provides a "witness" string demonstrating the incorrectness. We studied the usage and benefits of our tool over two terms, Fall 2019 and Spring 2021. Each term, students in one section of the Introduction to Computer Science Theory course were required to use our tool for sample homework questions targeting DFAs, NFAs, regular expressions, CFGs, and PDAs. In Fall 2019, this was a regular section of the course; we also collected comparison data from another section that did not use our tool but had the same instructor and homework assignments. In Spring 2021, a smaller honors section provided the perspective of that demographic. Overall, students who used the tool reported that it helped them not only solve the homework questions (and they outperformed the comparison group) but also better understand the underlying theory concepts. They were engaged with the tool: almost all persisted with their attempts until their submission was correct, despite being unable to random-walk to a solution. This indicates that witness feedback, a succinct explanation of incorrectness, is effective. Additionally, the tool assisted instructors with assignment grading.
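The witness feedback described above can be obtained by exploring the product of two DFAs: the shortest string on which the student's automaton and a reference automaton disagree serves as the witness. A minimal sketch of that idea, assuming a simple tuple encoding of DFAs (the tool's actual algorithm and data structures are not specified in the abstract; `find_witness` is illustrative):

```python
from collections import deque

def find_witness(dfa_a, dfa_b, alphabet):
    """BFS over the product of two complete DFAs; returns the shortest
    string accepted by exactly one of them, or None if they are
    equivalent. Each DFA is a tuple (start_state, transitions, accepting)
    where transitions maps (state, symbol) -> state."""
    start = (dfa_a[0], dfa_b[0])
    seen = {start}
    queue = deque([(start, "")])
    while queue:
        (qa, qb), word = queue.popleft()
        # A witness is found where exactly one DFA accepts.
        if (qa in dfa_a[2]) != (qb in dfa_b[2]):
            return word
        for sym in alphabet:
            nxt = (dfa_a[1][(qa, sym)], dfa_b[1][(qb, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + sym))
    return None  # no disagreement reachable: the languages are equal
```

Because the search is breadth-first, the returned witness is a shortest disagreeing string, which keeps the feedback succinct.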
                            Interactive Editing of Circuits in a Step-Based Tutoring System
Step-based tutoring systems are known to be more effective than traditional answer-based systems. However, they require that each step in a student's work be accepted and evaluated automatically to provide effective feedback. In the domain of linear circuit analysis, it is frequently necessary to allow students to draw or edit circuits on screen to simplify or otherwise transform them. Here, the interface developed to accept such input and provide immediate feedback in the Circuit Tutor system is described, along with systematic assessment data. Advanced simplification methods, such as removing circuit sections that are removably hinged, voltage-splittable, or current-splittable, are taught to students in an interactive tutorial and then supported in the circuit editor itself. To address the learning curve associated with such an interface, ~70 video tutorials were created to demonstrate exactly how to work the randomly generated problems at each level of each tutorial in the system. A complete written record, or "transcript," of each student's work in the system is being made available, showing both incorrect and correct steps. Introductory interactive (multiple-choice) tutorials are now included on most topics. Assessment of exercises using the interactive editor was carried out by professional evaluators at several institutions, including three that heavily serve underrepresented minorities. Both quantitative and qualitative methods were used, including focus groups, surveys, and interviews. Controlled, randomized, blind evaluations were carried out in three different course sections in Spring and Fall 2019 to evaluate three tutorials using the interactive editor, comparing Circuit Tutor to both a commercial answer-based system and conventional textbook-based paper homework. In Fall 2019, students rated the software a mean of 4.14/5 for being helpful in learning the material vs. 3.05/5 for paper homework (HW), with p < 0.001 and effect size d = 1.11σ. On relevant exam questions that semester, students scored significantly higher (p = 0.014, effect size d = 0.64σ) when using Circuit Tutor compared to paper HW in one class section, with no significant difference in the other section.
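The effect sizes reported above (d = 1.11σ, d = 0.64σ) follow Cohen's d, the mean difference divided by a pooled standard deviation. A small sketch of that computation on synthetic ratings (the study's raw survey data are not reproduced here; `cohens_d` and the sample values are illustrative assumptions):

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d with pooled sample standard deviation:
    d = (mean1 - mean2) / s_pooled, where s_pooled combines the
    two sample variances weighted by their degrees of freedom."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled

# Illustrative 1-5 ratings, not the study's data:
d = cohens_d([5, 4, 5, 4], [3, 2, 3, 2])
```

Values of d near 0.8 or above are conventionally read as large effects, which is why the 1.11σ rating difference is notable.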
- Award ID(s): 1821628
- PAR ID: 10179925
- Date Published:
- Journal Name: American Society for Engineering Education Annual Conference
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Step-based tutoring consists of breaking complicated problem-solving procedures down into individual steps whose inputs can be immediately evaluated to promote effective student learning. Here, recent progress on extending a step-based tutoring system for linear circuit analysis to cover new topics requiring complex, multi-step solution procedures is described. These topics include first- and second-order transient problems solved using classical differential-equation approaches. Students use an interactive circuit editor to modify the circuit appropriately for each step of the analysis, followed by writing and solving equations using methods of their choice as appropriate. Initial work on Laplace transform-based circuit analysis is also discussed. Detailed feedback is supplied at each step, along with fully worked examples, supporting introductory multiple-choice tutorials and YouTube videos, and a full record of the student's work is created in a PDF document for later study and review. Further, results of a comprehensive independent evaluation, involving both quantitative and qualitative analysis and users across four participating institutions, are discussed. Overall, students had very favorable experiences using the step-based system across Fall 2020 and Spring 2021. At least 48% of students in the Fall 2020 semester and 60% of students in the Spring 2021 semester agreed or strongly agreed with all survey questions about positive features of the system. Those who had used both the step-based system and the commercial MasteringEngineering system preferred the former by a 69% to 12% margin in surveys. Of instructors surveyed, 86% would recommend the system to others.
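The first-order transient problems mentioned in the abstract above have the classical closed-form solution v(t) = v∞ + (v0 − v∞)·e^(−t/τ) with τ = RC for an RC circuit. A brief sketch of evaluating that solution, assuming a hypothetical helper (`rc_step_response` and its parameter values are illustrative, not part of Circuit Tutor):

```python
import math

def rc_step_response(v0, v_inf, r, c, t):
    """Classical first-order transient solution for an RC circuit:
    v(t) = v_inf + (v0 - v_inf) * exp(-t / tau), with tau = R * C.
    v0 is the initial capacitor voltage; v_inf is the final DC value."""
    tau = r * c
    return v_inf + (v0 - v_inf) * math.exp(-t / tau)

# Illustrative values: 0 V initial, 10 V final, R = 1 kΩ, C = 1 mF (τ = 1 s).
v_at_tau = rc_step_response(0.0, 10.0, 1000.0, 1e-3, 1.0)
```

After one time constant the response has covered about 63.2% of the step, which is the standard sanity check for such solutions.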
- Computing theory analyzes abstract computational models to rigorously study the computational difficulty of various problems. Introductory computing theory can be challenging for undergraduate students, and the overarching goal of our research is to help students learn these computational models. The most common pedagogical tool for interacting with these models is the Java Formal Languages and Automata Package (JFLAP). We developed a JFLAP server extension, which accepts homework submissions from students, evaluates each submission as correct or incorrect, and provides a witness string when the submission is incorrect. Our extension currently provides witness feedback for deterministic finite automata, nondeterministic finite automata, regular expressions, context-free grammars, and pushdown automata. In Fall 2019, we ran a preliminary investigation on two synchronized sections (Control and Study) of the required undergraduate course Introduction to Computer Science Theory. The Study section (n = 29) used our extension for five targeted homework questions, and the Control section (n = 35) submitted these problems by traditional means. The Study section strongly outperformed the Control section with respect to the percentage of perfect homework grades on the targeted questions. Our most interesting result was student persistence: with only the short witness string as feedback, students voluntarily persisted in submitting attempts until correct.
- Olney, AM; Chounta, IA; Liu, Z; Santos, OC; Bittencourt, II (Ed.) This work investigates how tutoring discourse interacts with students' proximal knowledge to explain and predict students' learning outcomes. Our work is conducted in the context of high-dosage human tutoring, where 9th-grade students attended small-group tutorials and individually practiced problems on an Intelligent Tutoring System (ITS). We analyzed whether tutors' talk moves and students' performance on the ITS predicted scores on math learning assessments. We trained Random Forest Classifiers (RFCs) to distinguish high and low assessment scores based on tutor talk moves, students' ITS performance metrics, and their combination. A decision tree was extracted from each RFC to yield an interpretable model. We found AUCs of 0.63 for talk moves, 0.66 for ITS, and 0.77 for their combination, suggesting interactivity between the two feature sources. Specifically, the best decision tree emerged from combining the tutor talk moves that encouraged rigorous thinking with students' ITS mastery. In essence, tutor talk that encouraged mathematical reasoning predicted achievement for students who demonstrated high mastery on the ITS, whereas tutors' revoicing of students' mathematical ideas and contributions was predictive for students with low ITS mastery. Implications for practice are discussed.
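The AUC values reported in the abstract above (0.63, 0.66, 0.77) can be read via the Mann-Whitney formulation: the probability that a randomly chosen high-scoring example is ranked above a randomly chosen low-scoring one, with ties counting half. A minimal sketch of that computation (illustrative only; the study's actual evaluation pipeline is not described beyond the metric):

```python
def roc_auc(labels, scores):
    """ROC AUC as the probability that a randomly chosen positive
    example receives a higher score than a randomly chosen negative
    one, with ties counted as 0.5 (Mann-Whitney formulation)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative labels/scores, not the study's data:
auc = roc_auc([1, 0, 1, 0], [0.9, 0.8, 0.4, 0.3])
```

An AUC of 0.5 corresponds to chance ranking, so the jump from 0.63/0.66 (single feature source) to 0.77 (combined) reflects genuinely complementary information in the two sources.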