

Title: Comparing Bayesian Knowledge Tracing Model Against Naïve Mastery Model
We conducted a study to see whether using Bayesian Knowledge Tracing (BKT) models would save time and reduce the number of practice problems in programming tutors. We used legacy data collected by two programming tutors to compute BKT models for every concept covered by each tutor. The novelty of our model was that slip and guess parameters were computed for every problem presented by each tutor. Next, we used cross-validation to evaluate whether the resulting BKT model would have reduced the number of practice problems solved and the time spent by the students represented in the legacy data. We found that for 64.23% of the concepts, students would have saved time with the BKT model. The savings varied among concepts. Overall, students would have saved a mean of 1.28 minutes and 1.23 problems per concept. We also found that BKT models were more effective at saving time and problems on harder concepts.
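The standard BKT update underlying such a model can be sketched as follows. This is a minimal illustration, not the study's fitted model: the parameter values, the `mastery_threshold` of 0.95, and the per-problem parameter list are all illustrative assumptions; the only element taken from the abstract is that slip and guess are allowed to vary per problem rather than being fixed per concept.

```python
def bkt_update(p_know, correct, slip, guess, p_transit):
    """One Bayesian Knowledge Tracing step: condition the mastery
    probability on the observed response, then apply learning."""
    if correct:
        # P(learned | correct): correct despite not knowing = guess
        posterior = (p_know * (1 - slip)) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        # P(learned | incorrect): incorrect despite knowing = slip
        posterior = (p_know * slip) / (
            p_know * slip + (1 - p_know) * (1 - guess))
    # Chance of learning the concept on this practice opportunity
    return posterior + (1 - posterior) * p_transit

# Illustrative per-problem (correct, slip, guess) observations;
# practice stops once the mastery estimate crosses the threshold.
mastery_threshold = 0.95  # assumed stopping criterion
p = 0.2                   # assumed prior P(L0)
for correct, slip, guess in [(True, 0.10, 0.20),
                             (True, 0.15, 0.25),
                             (True, 0.10, 0.20)]:
    p = bkt_update(p, correct, slip, guess, p_transit=0.3)
    if p >= mastery_threshold:
        break
```

A tutor using this estimate can declare mastery early and skip remaining problems, which is the mechanism behind the time and problem savings reported above.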
Award ID(s): 1432190
NSF-PAR ID: 10301923
Journal Name: Intelligent Tutoring Systems
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Do students retain the programming concepts they have learned using software tutors over the long term? To answer this question, we analyzed the data collected by a software tutor on selection statements. We used the data of the students who used the tutor more than once to see whether they had retained for the second session what they had learned during the first. We found that students retained over 71% of the selection concepts they had learned during the first session. The more problems students solved during the first session, the greater the percentage of retention. Even when students already knew a concept and did not benefit from using the tutor, a small percentage of concepts were forgotten from the first session to the next, corresponding to transience of learning. Transience of learning varied with concepts. We list the confounding factors of the study.
  2.
    We studied long-term retention of the concepts that introductory programming students learned using two software tutors, one on tracing the behavior of functions and one on debugging functions. Whereas the concepts covered by the tutor on the behavior of functions were interdependent, the concepts covered by the debugging tutor were independent. We analyzed the data of the students who had used the tutors more than once, hours to weeks apart. Our objective was to find whether students retained until the second session what they had learned during the first. We found that the more problems students solved during the first session, the greater the retention. Knowledge and retention varied between the debugging and behavior tutors, even though they both dealt with functions, possibly because the debugging tutor covered independent concepts whereas the behavior tutor covered interdependent concepts.
  3. We conducted a controlled study to investigate whether having students choose the concept on which to solve each practice problem in an adaptive tutor helped improve learning. We analyzed data from an adaptive tutor used by introductory programming students over three semesters. The tutor presented code-tracing problems, used a pretest-practice-post-test protocol, and presented a line-by-line explanation of the correct solution as feedback. We found that choice did not increase the amount or pace of learning. However, it resulted in greater improvement in score on learned concepts, and the effect size was medium.
  4.
    Abstract: Modeling student learning processes is highly complex since it is influenced by many factors such as motivation and learning habits. The high volume of features and tools provided by computer-based learning environments confounds the task of tracking student knowledge even further. Deep learning models such as Long Short-Term Memory (LSTM) networks and classic Markovian models such as Bayesian Knowledge Tracing (BKT) have been successfully applied to student modeling. However, much of this prior work is designed to handle sequences of events with discrete timesteps, rather than considering the continuous aspect of time. Given that the time elapsed between successive elements in a student's trajectory can vary from seconds to days, we applied a Time-aware LSTM (T-LSTM) to model the dynamics of student knowledge state in continuous time. We investigated the effectiveness of T-LSTM on two domains with very different characteristics. One involves an open-ended programming environment where students can self-pace their progress; there, T-LSTM is compared against LSTM, Recent Temporal Pattern Mining, and classic Logistic Regression (LR) on the early prediction of student success. The other involves a classic tutor-driven intelligent tutoring system where the tutor scaffolds student learning step by step; there, T-LSTM is compared with LSTM, LR, and BKT on the early prediction of student learning gains. Our results show that T-LSTM significantly outperforms the other methods in the self-paced, open-ended programming environment, while on the tutor-driven ITS it ties with LSTM and outperforms both LR and BKT. In other words, while time irregularity exists in both datasets, T-LSTM works significantly better than other student models when the pace is driven by students. On the other hand, when such irregularity results from the tutor, T-LSTM was not superior to the other models, but its performance was not hurt either.
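    The core idea of the T-LSTM described above is to split the previous cell state into short-term and long-term components and decay only the short-term part by the elapsed time before the usual LSTM gating. A minimal sketch of that adjustment, assuming a learned projection `W_d`, `b_d` and the common `1/log(e + Δt)` decay (both assumptions; the paper's exact parameterization may differ):

```python
import numpy as np

def tlstm_adjust_cell(c_prev, delta_t, W_d, b_d):
    """Time-aware adjustment of the previous LSTM cell state:
    decay the short-term memory component by the elapsed time,
    leave the long-term component intact."""
    c_short = np.tanh(W_d @ c_prev + b_d)   # learned short-term component
    decay = 1.0 / np.log(np.e + delta_t)    # monotone decay in elapsed time
    c_long = c_prev - c_short               # long-term component is untouched
    # Adjusted state fed into the standard LSTM gate computations
    return c_long + c_short * decay
```

    With `delta_t = 0` the decay factor is 1 and the cell state passes through unchanged, which is why T-LSTM reduces to an ordinary LSTM when events are evenly and densely spaced.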
  5. Human decision making is plagued by systematic errors that can have devastating consequences. Previous research has found that such errors can be partly prevented by teaching people decision strategies that would allow them to make better choices in specific situations. Three bottlenecks of this approach are our limited knowledge of effective decision strategies, the limited transfer of learning beyond the trained task, and the challenge of efficiently teaching good decision strategies to a large number of people. We introduce a general approach to solving these problems that leverages artificial intelligence to discover and teach optimal decision strategies. As a proof of concept, we developed an intelligent tutor that teaches people the automatically discovered optimal heuristic for environments where immediate rewards do not predict long-term outcomes. We found that practice with our intelligent tutor was more effective than conventional approaches to improving human decision making. The benefits of training with our cognitive tutor transferred to a more challenging task and were retained over time. Our general approach to improving human decision making by developing intelligent tutors also proved successful for another environment with a very different reward structure. These findings suggest that leveraging artificial intelligence to discover and teach optimal cognitive strategies is a promising approach to improving human judgment and decision making. 