
Search results (all records) — Creators/Authors contains: "Baker, R.S."


  1. Research into "gaming the system" behavior in intelligent tutoring systems (ITSs) has been conducted for almost two decades, and detectors have been developed for many ITSs. Machine learning models can detect this behavior both in real time and in historical data. However, intelligent tutoring system designs often change over time, in terms of the student interface, assessment models, and data-collection log schemas. Can gaming detectors still be trusted a decade or more after they are developed? In this research, we evaluate the robustness (and degradation) of gaming detectors trained on older data logs and evaluated on current data logs. We demonstrate that some machine learning models developed using past data can still predict gaming behavior in student data collected 16 years later, but that there is considerable variance in how well different algorithms perform over time. A classic decision tree algorithm maintained its performance, while more contemporary algorithms struggled to transfer to new data, even though they performed better on unseen students within both the New and Old data sets individually. Examining feature importance values provides some explanation for the differences in performance between models and offers some insight into how we might safeguard against detector rot over time.
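The cross-era evaluation described in this abstract — fitting a detector on logs from one era of a system and scoring it on logs collected after the system has drifted — can be sketched as follows. This is a minimal illustration with synthetic data and a single made-up feature, not the paper's actual detectors, features, or logs.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_logs(n, shift=0.0):
    # One illustrative feature (e.g., a rapid-response rate); `shift`
    # mimics interface/log-schema drift between system versions.
    x = rng.normal(loc=shift, size=n)
    y = (x + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y

x_old, y_old = make_logs(1000)             # original-era logs
x_new, y_new = make_logs(1000, shift=0.4)  # logs collected years later

# "Train": pick the threshold that best separates gaming on old-era data.
thresholds = np.linspace(x_old.min(), x_old.max(), 50)
accs = [((x_old > t).astype(int) == y_old).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accs))]

# Evaluate the frozen detector on both eras.
acc_old = ((x_old > best_t).astype(int) == y_old).mean()
acc_new = ((x_new > best_t).astype(int) == y_new).mean()
print(f"accuracy on old-era logs: {acc_old:.2f}")
print(f"accuracy on new-era logs: {acc_new:.2f}")
```

Comparing the two accuracies is the essence of a detector-rot check; a real study would use richer features, multiple model families, and AUC-style metrics rather than raw accuracy.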
  2. Roll, I. ; McNamara, D. ; Sosnovsky, S. ; Luckin, R. ; Dimitrova, V. (Ed.)
    Scaffolding and providing feedback on problem-solving activities during online learning has consistently been shown to improve performance in younger learners. However, less is known about the impacts of feedback strategies on adult learners. This paper investigates how two computer-based support strategies, hints and required scaffolding questions, contribute to performance and behavior in an edX MOOC with integrated assignments from ASSISTments, a web-based platform that implements diverse student supports. Results from a sample of 188 adult learners indicated that those given scaffolds benefited less from ASSISTments support and were more likely to request the correct answer from the system.
  3. Prior studies have explored the potential of erroneous examples to help students learn more effectively by correcting errors in solutions to decimal problems. One recent study found that while students experience more confusion and frustration (confrustion) when working with erroneous examples, they demonstrate better retention of decimal concepts. In this study, we investigated whether this finding could be replicated in a digital learning game. In the erroneous examples (ErrEx) version of the game, students saw a character play the games and make mistakes, and then corrected the character's errors. In the problem solving (PS) version, students played the games by themselves. We found that confrustion was significantly, negatively correlated with performance on both the pretest (r = -.62, p < .001) and the posttest (r = -.68, p < .001), and so was gaming the system (pretest r = -.58, p < .001; posttest r = -.66, p < .001). Post hoc (Tukey) tests indicated that students who did not see any erroneous examples (PS-only) experienced significantly lower levels of confrustion (p < .001) and gaming (p < .001). While we did not find significant differences in post-test performance across conditions, our findings show that students working with erroneous examples experience consistently higher levels of confrustion in both game and non-game contexts.
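The r values reported in this abstract are Pearson correlation coefficients between a behavioral measure (confrustion or gaming) and test performance. A minimal sketch of that computation, on illustrative values that are not the study's data:

```python
import numpy as np

def pearson_r(a, b):
    # Pearson's r: covariance of the centered vectors divided by
    # the product of their standard deviations.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Hypothetical per-student scores, for illustration only.
confrustion = [0.9, 0.7, 0.6, 0.4, 0.3, 0.1]
posttest    = [0.2, 0.3, 0.5, 0.6, 0.8, 0.9]
print(f"r = {pearson_r(confrustion, posttest):.2f}")
```

A strongly negative r, as in the abstract's pretest and posttest results, indicates that higher confrustion accompanies lower test scores.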
  4. Recent student knowledge modeling algorithms such as Deep Knowledge Tracing (DKT) and Dynamic Key-Value Memory Networks (DKVMN) have been shown to produce accurate predictions of problem correctness within the same learning system. However, these algorithms do not attempt to directly infer student knowledge. In this paper we present an extension to these algorithms to also infer knowledge. We apply this extension to DKT and DKVMN, resulting in knowledge estimates that correlate better with a posttest than knowledge estimates from Bayesian Knowledge Tracing (BKT), an algorithm designed to infer knowledge, and another classic algorithm, Performance Factors Analysis (PFA). We also apply our extension to correctness predictions from BKT and PFA, finding that knowledge estimates produced with it correlate better with the posttest than BKT and PFA’s standard knowledge estimates. These findings are significant since the primary aim of education is to prepare students for later experiences outside of the immediate learning activity.
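For context on the baseline mentioned in this abstract, classic Bayesian Knowledge Tracing infers a probability that a skill is known and updates it after each observed response. The sketch below shows the standard BKT update equations with illustrative (not fitted) slip, guess, and learn parameters; it is not the paper's extension to DKT/DKVMN.

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    # Bayes step: condition P(known) on the observed response.
    if correct:
        cond = p_know * (1 - slip) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        cond = p_know * slip / (
            p_know * slip + (1 - p_know) * (1 - guess))
    # Transition step: the student may learn the skill this opportunity.
    return cond + (1 - cond) * learn

p = 0.3  # illustrative prior probability that the skill is known
for obs in [True, True, False, True]:
    p = bkt_update(p, obs)
print(f"estimated P(known) after 4 observations: {p:.2f}")
```

Knowledge estimates like `p` are what the abstract correlates with posttest scores; the paper's contribution is producing comparable estimates from correctness-prediction models such as DKT and DKVMN.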