We analyze teachers’ written feedback to students in an online learning environment, specifically a setting in which high school students in Uruguay are learning English as a foreign language. How complex should teachers’ feedback be? Should it be adapted to each student’s English proficiency level? How does teacher feedback affect the probability of engaging the student in a conversation? To explore these questions, we conducted both parametric (multilevel modeling) and non-parametric (bootstrapping) analyses of 27,627 messages exchanged between 35 teachers and 1074 students in 2017 and 2018. Our results suggest: (1) Teachers should adapt their feedback complexity to their students’ English proficiency level. Students who receive feedback that is too complex or too basic for their level post 13-15% fewer comments than those who receive adapted feedback. (2) Feedback that includes a question is associated with a higher odds ratio (17.5-19) of engaging the student in conversation. (3) For students with low English proficiency, slow turnaround (feedback after one week) reduces this odds ratio by 0.7. These results have potential implications for online platforms offering foreign language learning services, in which it is crucial to give the best possible learning experience while judiciously allocating teachers’ time.
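To make the analysis style concrete, the sketch below shows one way the engagement question could be modeled in Python: a logistic regression whose exponentiated coefficients read as odds ratios, plus a non-parametric percentile bootstrap for a confidence interval. It is a simplified, single-level stand-in for the paper's multilevel models, and the column names (`engaged`, `has_question`, `level_matched`, `slow_turnaround`) are hypothetical.

```python
# Sketch of the kind of analysis described above: a logistic model of
# "did this feedback draw the student into a conversation?", with
# exponentiated coefficients read as odds ratios, plus a percentile
# bootstrap. Column names are hypothetical; the paper's actual models
# are multilevel (teacher/student groupings), which this sketch omits.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_odds_ratios(df: pd.DataFrame) -> pd.Series:
    """Fit a logistic regression and return exp(coefficients) as odds ratios."""
    result = smf.logit(
        "engaged ~ has_question + level_matched + slow_turnaround", data=df
    ).fit(disp=0)
    return np.exp(result.params)

def bootstrap_odds_ratio(df: pd.DataFrame, term: str,
                         n_boot: int = 1000, seed: int = 0) -> np.ndarray:
    """Non-parametric percentile-bootstrap 95% CI for one term's odds ratio."""
    rng = np.random.default_rng(seed)
    estimates = [
        fit_odds_ratios(df.sample(n=len(df), replace=True, random_state=rng))[term]
        for _ in range(n_boot)
    ]
    return np.percentile(estimates, [2.5, 97.5])
```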
A Comparison of Inquiry-Based Conceptual Feedback vs. Traditional Detailed Feedback Mechanisms in Software Testing Education: An Empirical Investigation
The feedback provided by current testing education tools about the deficiencies in a student’s test suite either mimics industry code coverage tools or lists specific instructor test cases that are missing from the student’s test suite. While useful in some sense, these types of feedback are akin to revealing the solution to the problem, which can inadvertently encourage students to pursue a trial-and-error approach to testing rather than a more systematic approach that encourages learning. In addition to not teaching students why their test suite is inadequate, this type of feedback may motivate students to become dependent on the feedback rather than thinking for themselves. To address this deficiency, there is an opportunity to investigate alternative feedback mechanisms that include positive reinforcement of testing concepts. We argue that using an inquiry-based learning approach is better than simply providing the answers. To facilitate this type of learning, we present Testing Tutor, a web-based assignment submission platform that supports different levels of testing pedagogy via a customizable feedback engine. We evaluated the impact of the different types of feedback through an empirical study in two sophomore-level courses. We used Testing Tutor to provide students with different types of feedback, either traditional detailed code coverage feedback or inquiry-based conceptual feedback, and compared the effects. The results show that students who received conceptual feedback had higher code coverage (by different measures), fewer redundant test cases, and higher programming grades than students who received traditional code coverage feedback.
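One reported outcome measure is the number of redundant test cases. The abstract does not define Testing Tutor's redundancy metric, but a common operationalization, sketched below in Python with invented data, is coverage subsumption: a test is redundant when the rest of the suite already covers every line it covers.

```python
# Illustrative definition of a "redundant" test: one whose covered lines are
# already covered by the rest of the suite, so dropping it leaves suite-level
# line coverage unchanged. Testing Tutor's actual metric may differ; this is
# only one common operationalization, with made-up test names and line sets.
from typing import Dict, List, Set

def redundant_tests(coverage: Dict[str, Set[int]]) -> List[str]:
    """Return tests that add no lines beyond the union of the other tests."""
    redundant = []
    for name, lines in coverage.items():
        covered_by_others: Set[int] = set()
        for other, other_lines in coverage.items():
            if other != name:
                covered_by_others |= other_lines
        if lines <= covered_by_others:  # subset: no unique coverage
            redundant.append(name)
    return redundant

if __name__ == "__main__":
    suite = {
        "test_empty": {1, 2},
        "test_single": {1, 2, 3},
        "test_many": {1, 2, 3, 4, 5},  # subsumes the other two
    }
    # Note: tests are flagged one at a time; removing every flagged test at
    # once could still lose coverage if tests mutually subsume each other.
    print(redundant_tests(suite))  # ['test_empty', 'test_single']
```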
- Award ID(s): 2013296
- PAR ID: 10298372
- Date Published:
- Journal Name: Proceedings of the 52nd ACM Technical Symposium on Computer Science Education
- Page Range / eLocation ID: 87 to 93
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Students often get stuck when programming independently and need help to progress. Existing automated feedback can help students progress, but it is unclear whether it ultimately leads to learning. We present Step Tutor, which helps struggling students during programming by presenting them with relevant, step-by-step examples. The goal of Step Tutor is to help students progress and engage them in comparison, reflection, and learning. When a student requests help, Step Tutor adaptively selects an example to demonstrate the next meaningful step in the solution. It engages the student in comparing "before" and "after" code snapshots and their corresponding visual output, and guides them to reflect on the changes. Step Tutor is a novel form of help that combines effective aspects of existing support features, such as hints and Worked Examples, to help students both progress and learn. To understand how students use Step Tutor, we asked nine undergraduate students to complete two programming tasks with its help and interviewed them about their experience. We present our qualitative analysis of the students' experience, which shows why and how they seek help from Step Tutor and what affordances it offers. These initial results suggest that students perceived that Step Tutor accomplished its goals of helping them to progress and learn.
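The abstract says Step Tutor "adaptively selects an example to demonstrate the next meaningful step" without detailing the selection algorithm. The Python sketch below shows one plausible reading of that description: find the expert solution snapshot most similar to the student's current code and return it with the following snapshot as a before/after pair. The function and its inputs are hypothetical illustrations, not Step Tutor's actual implementation.

```python
# Toy version of "pick the next meaningful step to show": compare the
# student's code against an ordered list of expert solution snapshots and
# return the (before, after) pair just ahead of the closest match. This is
# a plausible sketch only, not Step Tutor's published algorithm.
from difflib import SequenceMatcher
from typing import List, Optional, Tuple

def next_step_example(student_code: str,
                      snapshots: List[str]) -> Optional[Tuple[str, str]]:
    """Return ("before", "after") snapshots bracketing the student's progress."""
    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a, b).ratio()

    # Index of the expert snapshot most similar to the student's current code.
    best = max(range(len(snapshots)),
               key=lambda i: similarity(student_code, snapshots[i]))
    if best + 1 >= len(snapshots):
        return None  # student is already near the final solution
    return snapshots[best], snapshots[best + 1]
```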
We conducted a controlled study to investigate whether having students choose the concept on which to solve each practice problem in an adaptive tutor helped improve learning. We analyzed data from an adaptive tutor used by introductory programming students over three semesters. The tutor presented code-tracing problems, used a pretest-practice-post-test protocol, and presented a line-by-line explanation of the correct solution as feedback. We found that choice did not increase the amount or pace of learning, but it resulted in greater improvement in scores on the learned concepts, with a medium effect size.
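The feedback format described here, a line-by-line explanation of the correct solution to a code-tracing problem, can be generated mechanically. The sketch below uses Python's `sys.settrace` hook to record each executed line of a toy snippet together with the local variable values as that line is reached; a real tutor would render such a trace as prose. The sample problem is invented for illustration.

```python
# Minimal sketch of collecting raw material for line-by-line code-tracing
# feedback: run the snippet under a trace hook and record, for each line
# event, the line number and the locals at that moment. Illustrative only.
import sys
from typing import Callable, Dict, List, Tuple

def trace_lines(func: Callable[[], None]) -> List[Tuple[int, Dict[str, object]]]:
    """Execute func and return (line_number, locals) for each line event."""
    events: List[Tuple[int, Dict[str, object]]] = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            events.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        func()
    finally:
        sys.settrace(None)
    return events

def sample_problem() -> None:
    total = 0
    for i in range(3):
        total += i

for lineno, local_vars in trace_lines(sample_problem):
    print(f"line {lineno}: {local_vars}")
```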
To better prepare future generations, knowledge about computers and programming is one of the many skills that are part of almost all Science, Technology, Engineering, and Mathematics programs; however, teaching and learning programming is a complex task that is generally considered difficult by students and teachers alike. One approach to engage and inspire students from a variety of backgrounds is the use of educational robots. Unfortunately, previous research presents mixed results on the effectiveness of educational robots on student learning. One possibility for this lack of clarity may be that students have a wide variety of learning styles. It is possible that the use of kinesthetic feedback, in addition to the normally used visual feedback, may improve learning with educational robots by providing a richer, multi-modal experience that may appeal to a larger number of students with different learning styles. It is also possible, however, that the addition of kinesthetic feedback, and the way it may interfere with the visual feedback, may decrease a student’s ability to interpret the program commands being executed by a robot, which is critical for program debugging. In this work, we investigated whether human participants were able to accurately determine a sequence of program commands performed by a robot when kinesthetic and visual feedback were used together. Command recall and end-point location determination were compared to the typically used visual-only method, as well as to a narrative description. Results from 10 sighted participants indicated that individuals were able to accurately determine a sequence of movement commands and their magnitude when using combined kinesthetic + visual feedback. Participants’ recall accuracy of program commands was actually better with kinesthetic + visual feedback than with visual feedback alone. Although recall accuracy was even better with the narrative description, this was primarily because participants confused an absolute rotation command with a relative rotation command under kinesthetic + visual feedback. Participants’ zone-location accuracy for the end point after a command was executed was significantly better for both the kinesthetic + visual feedback and narrative methods than for the visual-only method. Together, these results suggest that combined kinesthetic + visual feedback improves, rather than decreases, an individual’s ability to interpret program commands.
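The confusion the study observed between absolute and relative rotation commands is easy to reproduce on paper. The Python sketch below dead-reckons a robot's end point from a command list; the command names (`forward`, `rotate_to`, `rotate_by`) are invented for the illustration and are not the study platform's actual instruction set.

```python
# Worked example of why absolute vs. relative rotation matters when tracing
# a robot program: an absolute command overwrites the heading, a relative
# one adds to it, so the same numeric argument can end in different poses.
# Command names here are made up for the illustration.
import math

def run(commands):
    """Dead-reckon (x, y, heading_deg) from a list of (op, value) commands."""
    x, y, heading = 0.0, 0.0, 0.0  # heading 0 = +x axis, degrees CCW
    for op, value in commands:
        if op == "forward":
            x += value * math.cos(math.radians(heading))
            y += value * math.sin(math.radians(heading))
        elif op == "rotate_to":   # absolute: heading becomes value
            heading = value
        elif op == "rotate_by":   # relative: heading changes by value
            heading += value
    return x, y, heading % 360.0

prog = [("forward", 10), ("rotate_by", 90), ("forward", 10)]
print(run(prog))   # ≈ (10.0, 10.0, 90.0)

# After a prior turn, absolute and relative rotation diverge: the robot ends
# facing 90°, not 135°, because rotate_to discards the earlier 45° turn.
prog2 = [("rotate_by", 45), ("rotate_to", 90), ("forward", 10)]
print(run(prog2))  # ≈ (0.0, 10.0, 90.0)
```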
Software testing is an essential skill for computer science students. Prior work reports that students desire support in determining what code to test and which scenarios should be tested. In response, we present a lightweight testing checklist that contains both tutorial information and testing strategies to guide students in what and how to test. To assess the impact of the testing checklist, we conducted an experimental, controlled A/B study with 32 undergraduate and graduate students. The study task was writing a test suite for an existing program. Students were given either the testing checklist (the experimental group) or a tutorial on a standard coverage tool with which they were already familiar (the control group). By analyzing the combination of student-written tests and survey responses, we found that students with the checklist performed as well as or better than the coverage tool group, suggesting a potential positive impact of the checklist (or, at minimum, a non-negative impact). This is particularly noteworthy given that the control condition, the coverage tool, is the state of the practice. These findings suggest that testing tool support does not need to be sophisticated to be effective.
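The paper's checklist itself is not reproduced in this abstract, so the sketch below only illustrates the general idea of checklist-guided test writing: each unit test is motivated by one checklist item (typical input, boundary values, out-of-range input, error handling) for a small function. The item wording and the function are invented for the example.

```python
# Sketch of checklist-guided testing: each test below answers one checklist
# item, illustrating "what and how to test" for a tiny clamp() function.
# The checklist wording and the function are invented, not the paper's.
import unittest

def clamp(value: float, low: float, high: float) -> float:
    """Clamp value into [low, high]; raise if the bounds are reversed."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampChecklistTests(unittest.TestCase):
    def test_typical_case(self):     # checklist: a typical, in-range input
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_boundaries(self):       # checklist: values exactly at the edges
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)

    def test_outside_range(self):    # checklist: out-of-range inputs
        self.assertEqual(clamp(-1, 0, 10), 0)
        self.assertEqual(clamp(11, 0, 10), 10)

    def test_invalid_bounds(self):   # checklist: error handling
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)

if __name__ == "__main__":
    unittest.main()
```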