Personalized learning environments require eliciting a student’s
knowledge state, and this need has inspired researchers to
propose distinct models of that knowledge state.
Recently, the spotlight has shone on comparisons between
traditional, interpretable models such as Bayesian Knowledge
Tracing (BKT) and complex, opaque neural network
models such as Deep Knowledge Tracing (DKT). Although
DKT appears to be a powerful predictive model, little effort
has been expended to dissect the source of its strength.
We begin with the observation that DKT differs from BKT
along three dimensions: (1) DKT is a neural network with
many free parameters, whereas BKT is a probabilistic model
with few free parameters; (2) a single instance of DKT is
used to model all skills in a domain, whereas a separate
instance of BKT is constructed for each skill; and (3) the input
to DKT interlaces practice from multiple skills, whereas
the input to BKT is separated by skill. We tease apart these
three dimensions by constructing versions of DKT which are
trained on single skills and which are trained on sequences
separated by skill. Exploration of three data sets reveals
that dimensions (1) and (3) are critical; dimension (2) is
not. Our investigation gives us insight into the structural
regularities in the data that DKT is able to exploit but that
BKT cannot.
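BKT’s “few free parameters” are typically four per skill: prior knowledge, slip, guess, and transition probability. As a minimal sketch of the standard BKT observation-and-learning update (the parameter values below are illustrative defaults, not fitted estimates from any of the papers summarized here):

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """One Bayesian Knowledge Tracing step: posterior over mastery
    given an observed response, then apply the learning transition."""
    if correct:
        cond = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        cond = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Chance the skill was learned on this practice opportunity.
    return cond + (1 - cond) * p_transit

# A short practice sequence on one skill (1 = correct, 0 = incorrect).
p_know = 0.3
for obs in [1, 1, 0, 1]:
    p_know = bkt_update(p_know, obs)
```

Because a separate instance with its own four parameters is fit per skill, the input to BKT is necessarily separated by skill, which is exactly dimension (3) above.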
Extending Deep Knowledge Tracing: Inferring Interpretable Knowledge and Predicting Post System Performance
Recent student knowledge modeling algorithms such as Deep Knowledge Tracing
(DKT) and Dynamic Key-Value Memory Networks (DKVMN) have been shown to produce
accurate predictions of problem correctness within the same learning system. However, these algorithms do not attempt to directly infer student knowledge. In this paper we present an extension to these algorithms to also infer knowledge. We apply this extension to DKT and DKVMN, resulting in knowledge estimates that correlate better with a posttest than knowledge estimates from Bayesian Knowledge Tracing (BKT), an algorithm designed to infer knowledge, and another classic algorithm, Performance Factors Analysis (PFA). We also apply our extension to correctness predictions from BKT and PFA, finding that knowledge estimates produced with it correlate better with the posttest than BKT and PFA’s standard knowledge estimates. These findings are significant since the primary aim of education is to prepare students for later experiences outside of the immediate learning activity.
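The summary above does not spell out how the extension converts correctness predictions into knowledge estimates. As one plausible illustration only (the function, the per-skill grouping, and the last-three-attempts window are our assumptions, not the authors’ procedure), an aggregation over a model’s predicted correctness probabilities might look like:

```python
from collections import defaultdict

def knowledge_from_predictions(preds):
    """Aggregate per-attempt correctness predictions into per-skill
    knowledge estimates. `preds` is a list of (skill_id, p_correct)
    pairs in attempt order; the estimate for each skill is the mean
    of that skill's last few predicted probabilities."""
    by_skill = defaultdict(list)
    for skill, p in preds:
        by_skill[skill].append(p)
    # Average the final 3 predictions per skill (fewer if unavailable).
    return {s: sum(ps[-3:]) / len(ps[-3:]) for s, ps in by_skill.items()}

est = knowledge_from_predictions(
    [("fractions", 0.3), ("fractions", 0.7), ("fractions", 0.9)])
```

The appeal of such an estimate is that it can be computed from any correctness-predicting model (DKT, DKVMN, BKT, or PFA) and then validated against an external posttest, which is how the paper compares the approaches.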
- Award ID(s): 1661153
- PAR ID: 10226385
- Journal Name: Proceedings of the 28th International Conference on Computers in Education
- Sponsoring Org: National Science Foundation
More Like this
We report work-in-progress that aims to better understand prediction performance differences between Deep Knowledge Tracing (DKT) and Bayesian Knowledge Tracing (BKT) as well as “gaming the system” behavior by considering variation in features and design across individual pieces of instructional content. Our “non-monolithic” analysis considers hundreds of “workspaces” in Carnegie Learning’s MATHia intelligent tutoring system and the extent to which two relatively simple features extracted from MATHia logs, potentially related to gaming the system behavior, are correlated with differences in DKT and BKT prediction performance. We then take a closer look at a set of six MATHia workspaces, three of which represent content in which DKT out-performs BKT and three of which represent content in which BKT out-performs DKT or there is little difference in performance between the approaches. We present some preliminary findings related to the extent to which students game the system in these workspaces, across two school years, as well as other facets of variability across these pieces of instructional content. We conclude with a road map for scaling these analyses over much larger sets of MATHia workspaces and learner data.
Wang, N.; Rebolledo-Mendez, G.; Matsuda, N.; Santos, O.C.; Dimitrova, V. (Eds.)
Students use learning analytics systems to make day-to-day learning decisions, but may not understand their potential flaws. This work delves into student understanding of an example learning analytics algorithm, Bayesian Knowledge Tracing (BKT), using Cognitive Task Analysis (CTA) to identify knowledge components (KCs) comprising expert student understanding. We built an interactive explanation to target these KCs and performed a controlled experiment examining how varying the transparency of limitations of BKT impacts understanding and trust. Our results show that, counterintuitively, providing some information on the algorithm’s limitations is not always better than providing no information. The success of the methods from our BKT study suggests avenues for the use of CTA in systematically building evidence-based explanations to increase end user understanding of other complex AI algorithms in learning analytics as well as other domains.
The use of Bayesian Knowledge Tracing (BKT) models to predict student learning and mastery, especially in mathematics, is a well-established and proven approach in learning analytics. In this work, we report on our analysis examining the generalizability of BKT models across academic years, a degradation sometimes attributed to “detector rot.” We compare the generalizability of Knowledge Tracing (KT) models by comparing model performance in predicting student knowledge within an academic year and across academic years. Models were trained on data from two popular open-source curricula available through Open Educational Resources. We observed that the models were generally highly performant in predicting student learning within an academic year, whereas models trained on certain academic years generalized better than those trained on others. We posit that KT models are relatively stable in performance across academic years yet can still be susceptible to systemic changes and shifts in underlying learner behavior. Given the evidence in this paper, learning platforms leveraging KT models need to be mindful of systemic changes or drastic changes in certain user demographics.
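A cross-year generalizability check of the kind described can be sketched as follows. The `auc` helper (AUC via the Mann-Whitney U statistic) is standard, but the year labels and scores below are toy values for illustration only; the paper’s actual data, models, and metrics may differ.

```python
def auc(y_true, y_score):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs the scores rank correctly,
    counting ties as half-correct."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical: a KT model trained on one academic year, scored on its
# own held-out students (within-year) vs. next year's students (across).
within = auc([1, 0, 1, 1, 0], [0.9, 0.2, 0.7, 0.8, 0.4])
across = auc([1, 0, 1, 1, 0], [0.6, 0.5, 0.4, 0.7, 0.6])
# A within-year vs. across-year gap like this is what "detector rot"
# would look like in such an evaluation.
```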