

Title: Effectiveness of Crowd-Sourcing On-Demand Assistance from Teachers in Online Learning Platforms
It has been shown in multiple studies that expert-created on-demand assistance, such as hint messages, improves student learning in online learning environments. However, there is also evidence that certain types of assistance may be detrimental to student learning. In addition, creating and maintaining on-demand assistance is difficult and time-consuming. In the 2017–2018 academic year, 132,738 distinct problems were assigned inside ASSISTments, but only 38,194 of those problems had on-demand assistance. To take on-demand assistance to scale, we needed a system that could gather new on-demand assistance and allow us to test and measure its effectiveness. Thus, we designed and deployed TeacherASSIST inside ASSISTments. TeacherASSIST allowed teachers to create on-demand assistance for any problem as they assigned those problems to their students. TeacherASSIST then redistributed on-demand assistance created by one teacher to students outside of that teacher's classrooms. We found that teachers inside ASSISTments created 40,292 new instances of assistance for 25,957 different problems in three years. Fourteen teachers each created more than 1,000 instances of on-demand assistance. We also conducted two large-scale randomized controlled experiments to investigate how on-demand assistance created by one teacher affected students outside of their classes. Students who received on-demand assistance for one problem showed a statistically significant improvement in performance on the next problem. This improvement confirmed our hypothesis that crowd-sourced on-demand assistance was of sufficient quality to improve student learning, allowing us to take on-demand assistance to scale.
Award ID(s):
1940236 1940093 1636782 1931523 1724889 1931419
NSF-PAR ID:
10191834
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Seventh ACM Conference on Learning @ Scale (L@S)
Page Range / eLocation ID:
115-124
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  2. The use of computer-based systems in classrooms has provided teachers with new opportunities in delivering content to students, supplementing instruction, and assessing student knowledge and comprehension. Among the largest benefits of these systems is their ability to provide students with feedback on their work and to report student performance and progress to their teacher. While computer-based systems can automatically assess student answers to a range of question types, a limitation faced by many systems concerns open-ended problems. Many systems are either unable to provide support for open-ended problems, relying on the teacher to grade them manually, or avoid such question types entirely. Due to recent advancements in natural language processing methods, the automation of essay grading has made notable strides. However, much of this research has pertained to domains outside of mathematics, where teachers can use open-ended problems to assess students' understanding of mathematical concepts beyond what is possible with other types of problems. This research explores the viability and challenges of developing automated graders of open-ended student responses in mathematics. We further explore how the scale of available data impacts model performance. Focusing on content delivered through the ASSISTments online learning platform, we present a set of analyses pertaining to the development and evaluation of models to predict teacher-assigned grades for student open responses. 
  4. Math performance continues to be an important focus for improvement. Many districts adopted educational technology programs to support student learning and teacher instruction. The ASSISTments program provides feedback to students as they solve homework problems and automatically prepares reports for teachers about student performance on daily assignments. During the 2018–19 and 2019–20 school years, WestEd led a large-scale randomized controlled trial to replicate the effects of ASSISTments in 63 schools in North Carolina in the US. Thirty-two treatment schools implemented ASSISTments in 7th-grade math classrooms. Recently, we conducted a follow-up analysis to measure the long-term effects of ASSISTments on student performance one year after the intervention, when the students were in 8th grade. The initial results suggested that implementing ASSISTments in 7th grade improved students' performance in 8th grade and that minority students benefited more from the intervention. 
  5.
    Teacher responses to student mathematical thinking (SMT) matter because the way in which teachers respond affects student learning. Although studies have provided important insights into the nature of teacher responses, little is known about the extent to which these responses take into account the potential of the instance of SMT to support learning. This study investigated teachers’ responses to a common set of instances of SMT with varied potential to support students’ mathematical learning, as well as the productivity of such responses. To examine variations in responses in relation to the mathematical potential of the SMT to which they are responding, we coded teacher responses to instances of SMT in a scenario-based interview. We did so using a scheme that analyzes who interacts with the thinking (Actor), what they are given the opportunity to do in those interactions (Action), and how the teacher response relates to the actions and ideas in the contributed SMT (Recognition). The study found that teachers tended to direct responses to the student who had shared the thinking, use a small subset of actions, and explicitly incorporate students’ actions and ideas. To assess the productivity of teacher responses, we first theorized the alignment of different aspects of teacher responses with our vision of responsive teaching. We then used the data to analyze the extent to which specific aspects of teacher responses were more or less productive in particular circumstances. We discuss these circumstances and the implications of the findings for teachers, professional developers, and researchers. 