

Search for: All records

Award ID contains: 1724889

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. As more educators integrate their curricula with online learning, it is easier to crowdsource content from them. Crowdsourced tutoring has been proven to reliably increase students' next-problem correctness. In this work, we confirmed the findings of a previous study in this area, with stronger confidence margins than previously, and revealed that only a portion of crowdsourced content creators had a reliable benefit to students. Furthermore, this work provides a method to rank content creators relative to each other, which was used to determine which content creators were most effective overall, and which content creators were most effective for specific groups of students. When exploring data from TeacherASSIST, a feature within the ASSISTments learning platform that crowdsources tutoring from teachers, we found that while overall this program provides a benefit to students, some teachers created more effective content than others. Despite this finding, we did not find evidence that the effectiveness of content reliably varied by student knowledge level, suggesting that the content is unlikely to be suitable for personalizing instruction based on student knowledge alone. These findings are promising for the future of crowdsourced tutoring, as they help provide a foundation for assessing the quality of crowdsourced content and investigating content for opportunities to personalize students' education.
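The per-creator ranking described above suggests a simple statistical recipe. Below is a minimal sketch, assuming a log with one row per student-problem encounter; the column names and the rule of ranking by the lower confidence bound are illustrative assumptions, not the paper's actual method.

```python
import numpy as np
import pandas as pd

def rank_creators(log: pd.DataFrame) -> pd.DataFrame:
    """Rank content creators by the lower confidence bound of their benefit.

    Assumed columns: 'creator', 'treated' (1 if the student saw that
    creator's tutoring), 'next_correct' (1 if the next problem was correct).
    Assumes each creator has students in both arms.
    """
    rows = []
    for creator, g in log.groupby("creator"):
        t = g[g.treated == 1].next_correct
        c = g[g.treated == 0].next_correct
        effect = t.mean() - c.mean()  # estimated benefit vs. control
        se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
        rows.append({"creator": creator,
                     "effect": effect,
                     "lower_95": effect - 1.96 * se})  # Wald 95% lower bound
    # Rank by the lower bound, so only reliably beneficial creators rise.
    return pd.DataFrame(rows).sort_values("lower_95", ascending=False)
```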
  2. Open-ended questions in mathematics are commonly used by teachers to monitor and assess students’ deeper conceptual understanding of content. Student answers to these types of questions often exhibit a combination of language, drawn diagrams and tables, and mathematical formulas and expressions that supply teachers with insight into the processes and strategies adopted by students in formulating their responses. While these student responses help to inform teachers on their students’ progress and understanding, the amount of variation in these responses can make it difficult and time-consuming for teachers to manually read, assess, and provide feedback to student work. For this reason, there has been a growing body of research in developing AI-powered tools to support teachers in this task. This work seeks to build upon this prior research by introducing a model that is designed to help automate the assessment of student responses to open-ended questions in mathematics through sentence-level semantic representations. We find that this model outperforms previously published benchmarks across three different metrics. With this model, we conduct an error analysis to examine characteristics of student responses that may be considered to further improve the method. 
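One common way to build the kind of sentence-level scorer described above is to feed sentence embeddings to a shallow classifier. A minimal sketch, assuming the sentence-transformers and scikit-learn libraries; the checkpoint name and two-stage setup are assumptions, not the paper's actual architecture.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Encode each free-form answer as a single sentence-level vector.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def train_scorer(answers: list[str], teacher_scores: list[int]):
    """Fit a shallow classifier on sentence embeddings of graded answers."""
    X = encoder.encode(answers)  # shape: (n_answers, embedding_dim)
    return LogisticRegression(max_iter=1000).fit(X, teacher_scores)

def predict_scores(clf, new_answers: list[str]):
    """Score ungraded answers with the trained classifier."""
    return clf.predict(encoder.encode(new_answers))
```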
  3. Educational content labeled with proper knowledge components (KCs) is particularly useful to teachers and content organizers. However, manually labeling educational content is labor-intensive and error-prone. To address this challenge, prior research proposed machine-learning-based solutions to auto-label educational content, with limited success. In this work, we significantly improve on prior research by (1) expanding the input types to include KC descriptions, instructional video titles, and problem descriptions (i.e., three types of prediction task), (2) doubling the granularity of the prediction from 198 to 385 KC labels (i.e., a more practical setting but a much harder multinomial classification problem), (3) improving prediction accuracy by 0.5–2.3% using Task-adaptive Pre-trained BERT, outperforming six baselines, and (4) proposing a simple evaluation measure by which we can recover 56–73% of mispredicted KC labels. All code and data sets used in the experiments are available at: https://github.com/tbs17/TAPT-BERT
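A minimal fine-tuning sketch for the 385-label KC classification task, using Hugging Face transformers. The actual TAPT-BERT pipeline (see the linked repository) first continues masked-language-model pre-training on in-domain educational text; the plain bert-base-uncased checkpoint below is only a stand-in for that adapted model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# 385 knowledge-component labels -> a multinomial classification head.
# In TAPT, this checkpoint would first be further pre-trained (masked LM)
# on in-domain text; bert-base-uncased here is an illustrative stand-in.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=385)

def predict_kc(text: str) -> int:
    """Predict the KC label index for one problem or video-title string."""
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))
```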
  4. Online education technologies, such as intelligent tutoring systems, have garnered popularity for their automation. Whether it be automated support for teachers (grading, feedback, summary statistics, etc.) or support for students (hints, common-wrong-answer messages, scaffolding), these systems have built well-rounded support for students and teachers alike. The automation in these technologies has often been limited to questions with well-structured answers, such as multiple choice or fill-in-the-blank. Recently, these systems have begun adopting support for a more diverse set of question types, most notably open-response questions. A common tool for developing automated open-response tools, such as automated grading or automated feedback, is pre-trained word embeddings. Recent studies have shown that there is underlying bias in the text these embeddings were trained on. This research aims to identify what level of unfairness may lie within machine-learned algorithms that utilize pre-trained word embeddings. We attempt to identify whether our ability to predict scores for open-response questions varies across different groups of student answers; for instance, whether predictions differ for a student who uses fractions as opposed to decimals. By performing a simulated study, we are able to identify the potential unfairness within our machine-learned models that use pre-trained word embeddings.
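The group comparison described above can be illustrated with a small helper. This is a sketch only, assuming predicted scores, true scores, and a boolean group indicator (e.g., fraction vs. decimal answers); it is not the simulation design used in the paper.

```python
import numpy as np

def error_gap(y_true, y_pred, group) -> float:
    """Compare mean absolute scoring error across two groups of answers.

    group: boolean array, e.g. True where the answer uses fractions and
    False where it uses decimals (illustrative grouping). A large gap
    suggests the embedding-based scorer treats the two styles unequally.
    """
    err = np.abs(np.asarray(y_true, dtype=float) -
                 np.asarray(y_pred, dtype=float))
    group = np.asarray(group, dtype=bool)
    return err[group].mean() - err[~group].mean()
```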
  5. Similar content has tremendous utility in classroom and online learning environments. For example, similar content can be used to combat cheating, track students' learning over time, and model students' latent knowledge. These different use cases all rely on different notions of similarity, which makes it difficult to determine content similarity in general. Crowdsourcing is an effective way to identify similar content in a variety of situations by providing workers with guidelines on how to identify similar content for a particular use case. However, crowdsourced opinions are rarely homogeneous and must therefore be aggregated into what is most likely the truth. This work presents the Dynamically Weighted Majority Vote method, a novel algorithm that combines aggregating workers' crowdsourced opinions with estimating each worker's reliability. The method was compared to traditional majority vote in both a simulation study and an empirical study, in which opinions on the similarity of seventh-grade mathematics problems were crowdsourced from middle school math teachers and college students. In both the simulation and the empirical study, the Dynamically Weighted Majority Vote method outperformed traditional majority vote, suggesting that it should be used instead of majority vote in future crowdsourcing endeavors.
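A minimal sketch of the idea behind such a method: alternate between forming a reliability-weighted consensus and re-estimating each worker's reliability from agreement with that consensus. The update rules below (and the Laplace smoothing) are illustrative assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def dwmv(votes: np.ndarray, n_iter: int = 10, eps: float = 1e-6):
    """Aggregate binary crowd opinions with dynamically estimated weights.

    votes: (n_workers, n_items) float array of 0/1 opinions, np.nan where
    a worker gave no opinion. Returns (consensus labels, worker weights).
    """
    n_workers, _ = votes.shape
    weights = np.ones(n_workers)          # start with all workers equal
    answered = ~np.isnan(votes)
    for _ in range(n_iter):
        # Weighted vote per item, ignoring missing opinions.
        filled = np.where(answered, votes, 0.0)
        num = (filled * weights[:, None]).sum(axis=0)
        den = (answered * weights[:, None]).sum(axis=0) + eps
        labels = (num / den >= 0.5).astype(float)
        # Re-estimate reliability as smoothed agreement with the consensus.
        agree = np.where(answered, votes == labels[None, :], False)
        weights = (agree.sum(axis=1) + 1) / (answered.sum(axis=1) + 2)
    return labels, weights
```

Weighting by estimated reliability lets a few careful workers outvote many careless ones, which is exactly where plain majority vote tends to fail.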
  6. Roll, I.; McNamara, D.; Sosnovsky, S.; Luckin, R.; Dimitrova, V. (Eds.)
    Scaffolding and providing feedback on problem-solving activities during online learning has consistently been shown to improve performance in younger learners. However, less is known about the impacts of feedback strategies on adult learners. This paper investigates how two computer-based support strategies, hints and required scaffolding questions, contribute to performance and behavior in an edX MOOC with integrated assignments from ASSISTments, a web-based platform that implements diverse student supports. Results from a sample of 188 adult learners indicated that those given scaffolds benefited less from ASSISTments support and were more likely to request the correct answer from the system. 
  7. This special issue includes papers from some of the leading competitors in the ASSISTments Longitudinal Data Mining Competition 2017, as well as research from non-competitors using the same data set. In this competition, participants attempted to predict whether students would go on to choose a career in a STEM field, using a click-stream dataset from middle school students working on math assignments inside ASSISTments, an online tutoring platform. At the conclusion of the competition on December 3, 2017, there were 202 participants, 74 of whom submitted predictions at least once. In this special issue, some of the leading competitors present their results and what they have learned about the link between behavior in online learning and future STEM career development.
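A baseline for this kind of prediction task aggregates per-student clickstream features and fits a simple classifier. The sketch below uses assumed column names and is not any competitor's actual model.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def stem_baseline(clicks: pd.DataFrame, labels: pd.Series) -> float:
    """Cross-validated AUC of a logistic-regression baseline.

    clicks: one row per logged action, with assumed columns 'student_id',
    'correct', 'hint_used', 'response_time'.
    labels: per-student 0/1 STEM-career outcome, indexed by student_id.
    """
    feats = clicks.groupby("student_id").agg(
        accuracy=("correct", "mean"),
        hint_rate=("hint_used", "mean"),
        median_time=("response_time", "median"),
        n_actions=("correct", "size"))
    X, y = feats, labels.loc[feats.index]
    return cross_val_score(LogisticRegression(max_iter=1000), X, y,
                           scoring="roc_auc", cv=5).mean()
```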
  8. Multiple studies have shown that expert-created on-demand assistance, such as hint messages, improves student learning in online learning environments. However, there is also evidence that certain types of assistance may be detrimental to student learning. In addition, creating and maintaining on-demand assistance is hard and time-consuming. In the 2017–2018 academic year, 132,738 distinct problems were assigned inside ASSISTments, but only 38,194 of those problems had on-demand assistance. To take on-demand assistance to scale, we needed a system that could gather new on-demand assistance and allow us to test and measure its effectiveness. We therefore designed and deployed TeacherASSIST inside ASSISTments. TeacherASSIST allows teachers to create on-demand assistance for any problem as they assign that problem to their students, and then redistributes assistance created by one teacher to students outside of that teacher's classroom. We found that teachers inside ASSISTments created 40,292 new instances of assistance for 25,957 different problems over three years; 14 teachers created more than 1,000 instances of on-demand assistance each. We also conducted two large-scale randomized controlled experiments to investigate how on-demand assistance created by one teacher affected students outside of their classes. Students who received on-demand assistance on one problem showed a statistically significant improvement on the next problem. This improvement confirmed our hypothesis that crowdsourced on-demand assistance is of sufficient quality to improve student learning, allowing us to take on-demand assistance to scale.
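The reported effect on next-problem performance is the kind of difference a two-proportion z-test can check. A minimal sketch using statsmodels; the paper's actual experimental analysis may differ.

```python
from statsmodels.stats.proportion import proportions_ztest

def next_problem_effect(correct_treat: int, n_treat: int,
                        correct_ctrl: int, n_ctrl: int):
    """Two-proportion z-test on next-problem correctness.

    correct_treat/n_treat: correct count and total next-problem attempts
    for students who received crowdsourced on-demand assistance;
    correct_ctrl/n_ctrl: the same for the control arm.
    Returns the estimated effect (difference in proportions) and p-value.
    """
    _, p_value = proportions_ztest([correct_treat, correct_ctrl],
                                   [n_treat, n_ctrl])
    effect = correct_treat / n_treat - correct_ctrl / n_ctrl
    return effect, p_value
```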