This innovative practice work-in-progress (WIP) paper describes our ongoing development and deployment of an online robotics education platform, which highlighted a gap: students lacked the interactive, feedback-rich learning environment essential for mastering programming concepts in robotics, which the traditional code → simulate → turn-in workflow does not provide. Because teaching resources are limited, students would benefit from real-time feedback that helps them find and fix their mistakes in programming assignments. To integrate such automated feedback, this paper focuses on creating a unit-testing system and integrating it into the course workflow. We provide this real-time feedback by building unit tests into the design of programming assignments, so that students can understand and fix their errors on their own, without instructors or TAs serving as a bottleneck. In line with the framework's personalized, student-centered approach, this method makes it easier for students to revise and debug their programming work, encouraging hands-on learning. The updated course workflow, with unit tests included, will strengthen the learning environment and make it more interactive, so that students can learn to program robots in a self-guided fashion.
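As a concrete illustration of the kind of real-time feedback described above, the sketch below shows an assignment-level unit test a student might run locally. The function name, signature, and controller gains are hypothetical stand-ins, not taken from the course platform:

```python
# Hypothetical assignment-level unit test; the function, signature, and gains
# below are illustrative, not taken from the platform described above.

def wheel_speeds(sensor_left: float, sensor_right: float) -> tuple[float, float]:
    """Student-written proportional line follower: steer toward the line."""
    base, gain = 0.5, 0.3
    error = sensor_left - sensor_right
    return (base - gain * error, base + gain * error)

def test_drives_straight_on_centered_line():
    left, right = wheel_speeds(0.5, 0.5)
    assert abs(left - right) < 1e-9, "robot should drive straight on a centered line"

def test_turns_toward_line_on_the_left():
    left, right = wheel_speeds(0.8, 0.2)
    assert left < right, "left wheel should slow so the robot steers left"

# Students run the checks themselves and get immediate, named failures to fix.
test_drives_straight_on_centered_line()
test_turns_toward_line_on_the_left()
print("all checks passed")
```

Each failing assertion names the behavior that is wrong, which is the self-serve debugging loop the workflow change aims to enable.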
                            SnapCheck: Automated Testing for Snap! Programs
                        
                    
    
Programming environments such as Snap!, Scratch, and Processing engage learners by letting them create programming artifacts such as apps and games with visual, interactive output. Learning to program in such a media-focused context has been shown to increase retention and success rates. However, assessing these visual, interactive projects requires laborious manual effort, making it difficult to offer automated or real-time feedback to students as they work. In this paper, we introduce SnapCheck, a dynamic testing framework for Snap! that enables instructors to author test cases with Condition-Action templates. The goal of SnapCheck is to allow instructors or researchers to author property-based test cases that can automatically assess students' interactive programs with high accuracy. Our evaluation of SnapCheck on 162 code snapshots from a Pong game assignment in an introductory programming course shows that our automated testing framework achieves at least 98% accuracy across all rubric items, showing the potential of SnapCheck for auto-grading and for providing formative feedback to students.
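The Condition-Action idea can be sketched roughly as follows; the class, field names, and state representation are assumptions made for illustration, not SnapCheck's actual API:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch of a Condition-Action test over a trace of program
# states; names and the state format are assumptions, not SnapCheck's API.
@dataclass
class ConditionActionTest:
    name: str
    condition: Callable[[dict], bool]   # fires on a snapshot of program state
    action: Callable[[dict], bool]      # property that must hold when it fires

    def run(self, trace: list[dict]) -> bool:
        """Pass if the property holds at every state where the condition fires."""
        fired = [state for state in trace if self.condition(state)]
        return bool(fired) and all(self.action(state) for state in fired)

# "Whenever the ball touches the paddle, it should bounce upward."
bounce_test = ConditionActionTest(
    name="ball bounces off paddle",
    condition=lambda s: s["ball_touching_paddle"],
    action=lambda s: s["ball_dy"] < 0,
)

trace = [
    {"ball_touching_paddle": False, "ball_dy": 3},
    {"ball_touching_paddle": True,  "ball_dy": -3},  # bounce observed
]
print(bounce_test.run(trace))  # True: the property held each time the condition fired
```

The template separates *when* to check (the condition) from *what* must be true (the action), which is what lets instructors express rubric items for interactive programs without scripting the whole game.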
- Award ID(s): 1917885
- PAR ID: 10279565
- Date Published:
- Journal Name: ITiCSE '21: Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education
- Page Range / eLocation ID: 227 to 233
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Peer assessment, as a form of collaborative learning, can engage students in active learning and improve their learning gains. However, current teaching platforms and programming environments provide little support for integrating peer assessment into in-class programming exercises. We identified challenges in conducting such exercises and adopting peer assessment through formative interviews with instructors of introductory programming courses. To address these challenges, we introduce PuzzleMe, a tool that helps computer science instructors conduct engaging in-class programming exercises. PuzzleMe leverages peer assessment to support a collaboration model in which students provide timely feedback on their peers' work. We propose two assessment techniques tailored to in-class programming exercises: live peer testing and live peer code review. Live peer testing can improve students' code robustness by allowing them to create and share lightweight tests with peers. Live peer code review can improve code understanding by intelligently grouping students to maximize meaningful code reviews. A two-week deployment study revealed that PuzzleMe encourages students to write useful test cases, identify code problems, correct misunderstandings, and learn a diverse set of problem-solving approaches from peers.
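One plausible way to realize the "intelligently grouping students" step is to pair students whose solutions differ most, so each review exposes a different approach. The similarity metric below (shared-token fraction) is an invented stand-in, not PuzzleMe's actual algorithm:

```python
from itertools import combinations

# Toy sketch of pairing for peer code review: match the two students whose
# solutions are least similar. The token-overlap metric is an assumption.
def similarity(code_a: str, code_b: str) -> float:
    a, b = set(code_a.split()), set(code_b.split())
    return len(a & b) / max(len(a | b), 1)

def pair_most_different(solutions: dict[str, str]) -> tuple[str, str]:
    """Return the pair of students with the least similar solutions."""
    return min(combinations(solutions, 2),
               key=lambda pair: similarity(solutions[pair[0]], solutions[pair[1]]))

solutions = {
    "ana": "for i in range(10): total += i",
    "ben": "total = sum(range(10))",
    "cam": "for i in range(10): total += i  # same loop",
}
print(pair_most_different(solutions))
```

Pairing dissimilar solutions maximizes the chance that each student sees a genuinely different problem-solving approach during review.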
- 
The feedback provided by current testing-education tools about the deficiencies in a student's test suite either mimics industry code-coverage tools or lists the specific instructor test cases that are missing from the student's test suite. While useful in some sense, these types of feedback are akin to revealing the solution to the problem, which can inadvertently encourage students to pursue a trial-and-error approach to testing rather than a more systematic approach that encourages learning. In addition to not teaching students why their test suite is inadequate, this type of feedback may lead students to depend on the feedback rather than thinking for themselves. To address this deficiency, there is an opportunity to investigate alternative feedback mechanisms that include positive reinforcement of testing concepts. We argue that using an inquiry-based learning approach is better than simply providing the answers. To facilitate this type of learning, we present Testing Tutor, a web-based assignment submission platform that supports different levels of testing pedagogy via a customizable feedback engine. We evaluated the impact of the different types of feedback through an empirical study in two sophomore-level courses. We use Testing Tutor to provide students with different types of feedback, either traditional detailed code-coverage feedback or inquiry-based conceptual feedback, and compare the effects. The results show that students who received conceptual feedback had higher code coverage (by several measures), fewer redundant test cases, and higher programming grades than students who received traditional code-coverage feedback.
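The contrast between the two feedback styles can be sketched as follows; the concept-to-line mapping is invented for illustration and is not Testing Tutor's actual feedback engine:

```python
# Toy contrast between coverage-style and concept-style feedback; the concept
# mapping below is an assumption, not Testing Tutor's real engine.
def coverage_feedback(covered: set[int], all_lines: set[int]) -> str:
    """Industry-style feedback: name the uncovered line numbers."""
    missed = sorted(all_lines - covered)
    return f"Lines not covered by your tests: {missed}"

def conceptual_feedback(covered: set[int], concept_lines: dict[str, set[int]]) -> str:
    """Inquiry-style feedback: name untested concepts, not line numbers."""
    gaps = [c for c, lines in concept_lines.items() if not lines <= covered]
    return "Concepts your test suite does not yet exercise: " + ", ".join(gaps)

concept_lines = {"empty input": {4}, "boundary value": {7, 8}, "typical case": {2, 3}}
covered = {2, 3, 4}
print(coverage_feedback(covered, {2, 3, 4, 7, 8}))
print(conceptual_feedback(covered, concept_lines))
```

The conceptual variant tells the student *what kind* of test is missing, prompting them to reason about why, rather than handing over the exact lines to hit.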
- 
When instructors want to design programming assignments to motivate their students, a common design choice is to have those students write code to make an artifact (e.g. apps, games, music, or images). The goal of this study is to understand the impacts of including artifact creation in a programming assignment on students' motivation, time on task, and cognitive load. To do so, we conducted a controlled lab study with seventy-three students from an introductory engineering course. The experimental group created a simulation they could interact with, thus having the full experience of artifact creation, while the control group wrote the exact same code but evaluated it only with test cases. We hypothesized that students who could interact with the simulation they were programming would be more motivated to complete the assignment and would report higher intrinsic motivation. However, we found no significant difference in motivation or cognitive load between the groups. Additionally, the experimental group spent more time completing the assignment than the control group. Our results suggest that artifact creation may not be necessary for motivating students in all contexts, and that artifact creation may have other effects such as increased time on task. Instructors and researchers should consider when, and in what contexts, artifact creation is beneficial and when it may not be.
- 
Abstract: The paper introduces a visual programming language and corresponding web- and cloud-based development environment called NetsBlox. NetsBlox is an extension of Snap! and it builds upon its visual formalism as well as its open source code base. NetsBlox adds distributed programming capabilities to Snap! by introducing two simple abstractions: messages and NetsBlox services. Messages containing data can be exchanged by two or more NetsBlox programs running on different computers connected to the Internet. Services are called on a client program and are executed on the NetsBlox server. These two abstractions make it possible to create distributed programs, for example multi-player games or client-server applications. We believe that NetsBlox provides increased motivation to high-school students to become creators and not just consumers of technology. At the same time, it helps teach them basic distributed programming concepts.
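The message abstraction can be modeled in a few lines; the sketch below simulates message passing in-process with queues, whereas real NetsBlox routes messages through its server between programs on different machines:

```python
import queue

# In-process model of the NetsBlox "message" abstraction; real NetsBlox
# delivers typed messages over the Internet via its server.
class Client:
    def __init__(self, name: str, network: dict):
        self.name, self.inbox = name, queue.Queue()
        network[name] = self  # register with the (simulated) network

def send(network: dict, sender: str, recipient: str, msg_type: str, **data):
    """Deliver a typed data message to another connected program."""
    network[recipient].inbox.put({"type": msg_type, "from": sender, **data})

network = {}
p1, p2 = Client("player1", network), Client("player2", network)

# A multi-player game exchange: player1 tells player2 its paddle moved.
send(network, "player1", "player2", "paddle_move", y=42)
msg = p2.inbox.get()
print(msg["type"], msg["y"])  # paddle_move 42
```

Typed messages with named data fields are what let two block programs on different computers coordinate without any socket-level code.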
 An official website of the United States government