

Title: Toward Improving Effectiveness of Crowdsourced, On-Demand Assistance From Educators in Online Learning Platforms
Studies have shown that providing on-demand assistance, additional instruction on a problem when a student requests it, improves student learning in online learning environments. Additionally, crowdsourced on-demand assistance generated by educators in the field is also effective. However, when provided on-demand assistance in these studies, students received assistance using problem-based randomization, where each condition represents a different assistance, for every problem encountered. As such, claims about a given educator’s effectiveness are made on a per-assistance basis and are not easily generalizable across all students and problems. This work aims to provide stronger claims about which educators are the most effective at generating on-demand assistance. Students will receive on-demand assistance using educator-based randomization, where each condition represents a different educator who has generated a piece of assistance, allowing students to be kept in the same condition over longer periods of time. Furthermore, this work attempts to find additional benefits of providing students assistance generated by the same educator compared to a random assistance available for the given problem. All data and analyses can be found on the Open Science Framework website.
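To make the design change concrete, here is a minimal sketch in Python contrasting the two randomization schemes; all names and data structures are hypothetical and do not come from the platform's actual code:

```python
# Hypothetical sketch: problem-based vs. educator-based randomization.
import random

def assign_problem_based(problem_id: str,
                         assistances: dict[str, list[str]]) -> str:
    """Problem-based randomization: each problem a student encounters
    draws independently from that problem's pool of assistances, so a
    student's condition changes from problem to problem."""
    return random.choice(assistances[problem_id])

def assign_educator_based(student_id: str, problem_id: str,
                          educators: list[str],
                          educator_for_student: dict[str, str],
                          by_educator: dict[tuple[str, str], str]) -> str | None:
    """Educator-based randomization: a student is assigned one educator
    (their condition) once and, when possible, always receives that
    educator's assistance for each problem they encounter."""
    educator = educator_for_student.setdefault(
        student_id, random.choice(educators))
    # Returns None when this educator wrote no assistance for the problem;
    # a real system would need a fallback policy here.
    return by_educator.get((educator, problem_id))
```

The persistent student-to-educator assignment is what lets effectiveness claims aggregate over an educator's whole body of assistance rather than over a single piece of it.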
Award ID(s):
1840771
NSF-PAR ID:
10374331
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Educational Data Mining Conference
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Studies have shown that on-demand assistance, additional instruction given on a problem per student request, improves student learning in online learning environments. Students may have opinions on whether an assistance was effective at improving their learning. As students are the driving force behind the effectiveness of assistance, there could exist a correlation between students’ perceptions of effectiveness and the computed effectiveness of the assistance. This work conducts a survey asking secondary education students whether a given assistance was effective in solving a problem in an online learning platform. It then provides a cursory glance at the data to see whether a correlation exists between student perception and the measured effectiveness of an assistance. Over a three-year period, approximately twenty-two thousand responses were collected from nearly forty-four hundred students. Initial analyses of the survey suggest no significant relationship between student perception and the computed effectiveness of an assistance, regardless of whether the student participated in the survey. All data and analyses can be found on the Open Science Framework website.
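One way such a perception-versus-effectiveness check could be run is sketched below; the file, column names, and data layout are assumptions for illustration, not the authors' actual pipeline:

```python
# Assumed layout: one row per survey response, pairing a student's binary
# "was this assistance effective?" vote with that assistance's separately
# computed effectiveness estimate.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("survey_responses.csv")  # hypothetical file

# Aggregate perception to a per-assistance rate of "effective" votes.
per_assistance = df.groupby("assistance_id").agg(
    perceived=("student_says_effective", "mean"),
    computed=("computed_effectiveness", "first"),
)

rho, p = spearmanr(per_assistance["perceived"], per_assistance["computed"])
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```

A rank correlation is used here only because it makes no linearity assumption; the abstract does not state which test the authors ran.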
  2. This evidence-based practices paper discusses the method employed in validating the use of a project-modified version of the PROCESS tool (Grigg, Van Dyken, Benson, & Morkos, 2013) for measuring student problem-solving skills. The PROCESS tool allows raters to score students’ ability in the domains of Problem definition, Representing the problem, Organizing information, Calculations, Evaluating the solution, Solution communication, and Self-assessment. Specifically, this research compares student performance on solving traditional textbook problems with novel, student-generated learning activities (i.e., reverse-engineering videos in order to then create their own homework problem and solution). The use of student-generated learning activities to assess student problem-solving skills has theoretical underpinning in Felder’s (1987) work on “creating creative engineers,” as well as the need to develop students’ abilities to transfer learning and solve problems in a variety of real-world settings. In this study, four raters used the PROCESS tool to score the performance of 70 students randomly selected from two undergraduate chemical engineering cohorts at two Midwest universities. Students from both cohorts solved 12 traditional textbook-style problems, and students from the second cohort solved an additional nine student-generated video problems. Any large-scale assessment where multiple raters use a rating tool requires the investigation of several aspects of validity. The many-facets Rasch measurement model (MFRM; Linacre, 1989) has the psychometric properties to determine whether any characteristics other than “student problem-solving skills” influence the scores assigned, such as rater bias, problem difficulty, or student demographics. Before implementing the full rating plan, MFRM was used to examine how raters interacted with the six items on the modified PROCESS tool when scoring a random selection of 20 students’ performance in solving one problem. An external evaluator led “inter-rater reliability” meetings where raters deliberated the rationale for their ratings, and differences were resolved by recourse to Pretz et al.’s (2003) problem-solving cycle, which informed the development of the PROCESS tool. To test the new understandings of the PROCESS tool, raters were assigned to score one new problem from a different randomly selected group of six students. Those results were then analyzed in the same manner as before. This iterative process resulted in substantial increases in reliability, which can be attributed to increased confidence that raters were operating with common definitions of the items on the PROCESS tool and rating with consistent and comparable severity. This presentation will include examples of the student-generated problems and a discussion of common discrepancies in and solutions to the raters’ initial use of the PROCESS tool. Findings, as well as the adapted PROCESS tool used in this study, can be useful to engineering educators and engineering education researchers.
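For readers unfamiliar with MFRM, the conventional facets formulation (Linacre, 1989) is sketched below; the study's exact parameterization may differ:

```latex
% Many-facets Rasch model, conventional form:
%   P_{nijk} : probability that rater j awards student n a rating in
%              category k (vs. k-1) on item i
%   B_n : student ability      D_i : item difficulty
%   C_j : rater severity       F_k : threshold between categories k-1 and k
\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
```

Because rater severity C_j enters the model as its own facet, differences in how harshly raters score can be separated from true differences in student ability, which is what makes the iterative inter-rater calibration described above measurable.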
  3. This research explores a novel human-in-the-loop approach that goes beyond traditional prompt engineering to harness Large Language Models (LLMs) with chain-of-thought prompting for grading middle school students’ short-answer formative assessments in science and generating useful feedback. While recent efforts have successfully applied LLMs and generative AI to automatically grade assignments in secondary classrooms, the focus has primarily been on providing scores for mathematical and programming problems, with little work targeting the generation of actionable insights from student responses. This paper addresses these limitations by exploring a human-in-the-loop approach to make the process more intuitive and more effective. By incorporating the expertise of educators, this approach seeks to bridge the gap between automated assessment and meaningful educational support in the context of science education for middle school students. We have conducted a preliminary user study, which suggests that (1) co-created models improve the performance of formative feedback generation, and (2) educator insight can be integrated at multiple steps in the process to inform what goes into the model and what comes out. Our findings suggest that in-context learning and human-in-the-loop approaches may provide a scalable approach to automated grading, where the performance of the automated LLM-based grader continually improves over time while also providing actionable feedback that can support students’ open-ended science learning.
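A minimal sketch of what such an in-context, chain-of-thought grading step with an educator review loop might look like is shown below; `call_llm`, the prompt wording, and all other names are placeholders, not the authors' system:

```python
# Hypothetical sketch of in-context (few-shot), chain-of-thought grading
# with a human-in-the-loop review step.
from typing import Callable

def build_grading_prompt(question: str, rubric: str,
                         exemplars: list[tuple[str, str]],
                         answer: str) -> str:
    """Assemble the prompt: an educator-written rubric, a few
    educator-graded exemplars (the in-context learning part), then the
    new response, with an instruction to reason step by step."""
    shots = "\n\n".join(
        f"Student answer: {a}\nGraded (reasoning first, then score): {g}"
        for a, g in exemplars)
    return (
        f"Question: {question}\n"
        f"Rubric (from the educator): {rubric}\n\n"
        f"{shots}\n\n"
        f"Student answer: {answer}\n"
        "Think step by step against the rubric, then give a score and "
        "one sentence of actionable feedback for the student.")

def grade_with_review(call_llm: Callable[[str], str], prompt: str,
                      educator_review: Callable[[str], str]) -> str:
    """Human-in-the-loop step: the educator can revise the model's
    draft before it reaches the student, and revised drafts can later
    be folded back in as new exemplars."""
    draft = call_llm(prompt)
    return educator_review(draft)
```

Feeding educator-revised gradings back into the exemplar set is one plausible mechanism for the "continually improves over time" behavior the abstract describes.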
  4. Early in the pandemic we gathered a group of educators to create and share educational opportunities for families to design and make STEAM projects while at home. As this effort, CoBuild19, continued, we decided to extend our offerings to include basic computer programming. To accomplish this, we created an offering called the Design with Code Club (DwCC). We structured DwCC to be different from other common coding offerings in that we wanted the main focus to be on kids designing solutions to problems that might include the use of technology and coding. We were purposeful in this decision for two main reasons. First, we wanted to make our coding club more interesting to girls, as previous research demonstrates their interest in designing solutions. Second, we wanted this effort to be different from most programming instruction, where coding activities use programming as the core of instruction and application in authentic, student-selected contexts plays a secondary role. DwCC was set up so that each of the first four weeks had a different larger challenge that was COVID-19 related, and sessions unfolded with alternating smaller challenges, discussions around design, and coding instruction that would develop the kids’ skills and knowledge of micro:bit capabilities. We culminated DwCC with an open-ended project where the kids were given the challenge of coming up with their own problem for which they might incorporate the micro:bit as part of the solution. Because we were doing all of this online, we used the micro:bit interface through Microsoft MakeCode, which includes a functional simulator. From our experiences we realized that simulations are not as enticing as physical computing with a tangible device, so we set up an incentive where youth who participated in at least three sessions of the club would receive a physical micro:bit. We advertised DwCC through Facebook and Twitter, and nearly 200 families registered their kids to participate. In the end, a total of 52 micro:bits were sent to youth participants. Based on this success, we sought to expand the effort and increase accessibility for groups that are traditionally underrepresented in STEM. In spring 2021, we offered a Girls DwCC (GDwCC). This was a redesigned version of the club where the focus was even more on problem solving through design. The club was run by all women, including one from the US, an industrial engineer from Mexico, and a computer programmer from Albania. More than 50 girls from 17 countries participated in the club! We are working on another version of GDwCC that will be offered in Spanish and focus on Latina girls in the US and Mexico. In the most recent iteration of DwCC, we are working with an educator at a school for deaf students to create a version of the club that works for their students. We are doing some modification of activities and recreating videos that involve sign language interpretation. In this presentation we will report on the variants of DwCC, results from participant feedback surveys, and plans for future versions.
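Purely as an illustration of the kind of small, COVID-19-themed micro:bit challenge described (the club itself worked in Microsoft MakeCode; this is not one of its actual activities), an equivalent hand-washing-timer program in micro:bit MicroPython might look like:

```python
# Hypothetical club-style challenge: a 20-second hand-washing timer.
from microbit import *

while True:
    if button_a.was_pressed():
        # Count down the recommended 20 seconds of hand washing,
        # showing the last digit of the remaining seconds each second.
        for remaining in range(20, 0, -1):
            display.show(str(remaining)[-1])
            sleep(1000)
        display.show(Image.HAPPY)  # done washing
```

The same program runs unchanged on a physical micro:bit or in a simulator, which mirrors the club's move from simulated to tangible devices.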