

Title: Comparing Bayesian Knowledge Tracing Model Against Naïve Mastery Model
We conducted a study to see whether using Bayesian Knowledge Tracing (BKT) models would reduce the time spent and the number of problems solved in programming tutors. We used legacy data collected by two programming tutors to compute BKT models for every concept covered by each tutor. The novelty of our model was that slip and guess parameters were computed for every problem presented by each tutor. Next, we used cross-validation to evaluate whether the resulting BKT model would have reduced the number of practice problems solved and the time spent by the students represented in the legacy data. We found that for 64.23% of the concepts, students would have saved time with the BKT model. The savings varied among concepts. Overall, students would have saved a mean of 1.28 minutes and 1.23 problems per concept. We also found that BKT models were more effective at saving time and problems on harder concepts.
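The per-problem variant described above amounts to a standard BKT belief update in which the slip and guess parameters are looked up for the specific problem being attempted rather than shared across the concept. A minimal sketch (parameter values are illustrative, not taken from the paper):

```python
def bkt_update(p_know, correct, slip, guess, p_transit):
    """One Bayesian Knowledge Tracing update for a single practice opportunity.

    p_know:    prior probability the student knows the concept
    correct:   whether the student answered this problem correctly
    slip:      P(incorrect | knows)  -- here estimated per problem
    guess:     P(correct | does not know)  -- here estimated per problem
    p_transit: probability of learning the concept from this opportunity
    """
    if correct:
        num = p_know * (1.0 - slip)
        den = num + (1.0 - p_know) * guess
    else:
        num = p_know * slip
        den = num + (1.0 - p_know) * (1.0 - guess)
    posterior = num / den
    # Apply the learning transition after the practice opportunity.
    return posterior + (1.0 - posterior) * p_transit

# A correct answer on a low-slip, low-guess problem raises the mastery
# estimate sharply; the tutor can stop presenting problems once the
# estimate crosses a mastery threshold (0.95 is a common choice).
p = 0.4
p = bkt_update(p, correct=True, slip=0.1, guess=0.2, p_transit=0.15)
# → p = 0.7875
```

Because slip and guess differ per problem here, the same response moves the mastery estimate by different amounts depending on which problem the tutor presented, which is what lets the model credit harder problems more.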
Award ID(s):
1432190
PAR ID:
10301923
Author(s) / Creator(s):
Date Published:
Journal Name:
Intelligent Tutoring Systems
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    We studied long-term retention of the concepts that introductory programming students learned using two software tutors on tracing the behavior of functions and on debugging functions. Whereas the concepts covered by the tutor on the behavior of functions were interdependent, the concepts covered by the debugging tutor were independent. We analyzed the data of the students who had used the tutors more than once, hours to weeks apart. Our objective was to find whether students retained what they had learned during the first session until the second session. We found that the more problems students solved during the first session, the greater the retention. Knowledge and retention varied between the debugging and behavior tutors, even though they both dealt with functions, possibly because the debugging tutor covered independent concepts whereas the behavior tutor covered interdependent concepts.
  2. Do students retain the programming concepts they have learned using software tutors over the long term? To answer this question, we analyzed the data collected by a software tutor on selection statements. We used the data of the students who used the tutor more than once to see whether they had retained for the second session what they had learned during the first session. We found that students retained over 71% of the selection concepts that they had learned during the first session. The more problems students solved during the first session, the greater the percentage of retention. Even when students already knew a concept and did not benefit from using the tutor, a small percentage of concepts were forgotten from the first session to the next, corresponding to transience of learning. Transience of learning varied with concepts. We list confounding factors of the study.
  3. We conducted a controlled study to investigate whether having students choose the concept on which to solve each practice problem in an adaptive tutor helped improve learning. We analyzed data from an adaptive tutor used by introductory programming students over three semesters. The tutor presented code-tracing problems, used a pretest-practice-post-test protocol, and presented a line-by-line explanation of the correct solution as feedback. We found that choice did not increase the amount of learning or pace of learning. However, it resulted in greater improvement in score on learned concepts, and the effect size was medium.
  4.
    We developed a tutor for imperative programming in C++. It covers algorithm formulation, program design, and coding, all three stages involved in writing a program to solve a problem. The design of the tutor is epistemic, i.e., true to real-life programming practice. The student works through all three stages of programming in interleaved fashion, and within the context of a single code canvas. The student has the sole agency to compose the program and write the code. The tutor uses goals and plans as prompts to scaffold the student through the programming process designed by an expert. It provides drill-down immediate feedback at the abstract, concrete, and bottom-out levels at each step. So, by the end of the session, the student is guaranteed to write the complete and correct program for a given problem. We used a model-based architecture to implement the tutor because of the ease with which it facilitates adding problems to the tutor. In a preliminary study, we found that practicing with the tutor helped students solve problems with fewer erroneous actions and in less time.
  5.
    Abstract: Modeling student learning processes is highly complex since it is influenced by many factors such as motivation and learning habits. The high volume of features and tools provided by computer-based learning environments confounds the task of tracking student knowledge even further. Deep learning models such as Long Short-Term Memory (LSTM) networks and classic Markovian models such as Bayesian Knowledge Tracing (BKT) have been successfully applied for student modeling. However, much of this prior work is designed to handle sequences of events with discrete timesteps, rather than considering the continuous aspect of time. Given that the time elapsed between successive elements in a student's trajectory can vary from seconds to days, we applied a Time-aware LSTM (T-LSTM) to model the dynamics of student knowledge state in continuous time. We investigate the effectiveness of T-LSTM on two domains with very different characteristics. One involves an open-ended programming environment where students can self-pace their progress; there, T-LSTM is compared against LSTM, Recent Temporal Pattern Mining, and classic Logistic Regression (LR) on the early prediction of student success. The other involves a classic tutor-driven intelligent tutoring system where the tutor scaffolds the student's learning step by step; there, T-LSTM is compared with LSTM, LR, and BKT on the early prediction of student learning gains. Our results show that T-LSTM significantly outperforms the other methods on the self-paced, open-ended programming environment, while on the tutor-driven ITS it ties with LSTM and outperforms both LR and BKT. In other words, while time-irregularity exists in both datasets, T-LSTM works significantly better than other student models when the pace is driven by students. On the other hand, when such irregularity results from the tutor, T-LSTM was not superior to the other models, but its performance was not hurt either.
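The key mechanism of the T-LSTM described above is that, before each step, the previous cell state is split into short- and long-term components and only the short-term component is decayed by the elapsed time. A minimal NumPy sketch of that adjustment, following the T-LSTM formulation of Baytas et al. with the decay g(Δt) = 1/log(e + Δt) (weights here are random placeholders, not trained values):

```python
import numpy as np

def tlstm_adjust_cell(c_prev, delta_t, W_d, b_d):
    """Time-aware adjustment of the LSTM cell state before the gate updates.

    Decomposes the previous cell state into a short-term part (a learned
    projection) and a long-term part (the remainder), then decays only the
    short-term part by the elapsed time delta_t. Long gaps (days) therefore
    discard more recent detail than short gaps (seconds).
    """
    c_short = np.tanh(W_d @ c_prev + b_d)     # learned short-term memory
    c_long = c_prev - c_short                 # long-term memory, kept intact
    decay = 1.0 / np.log(np.e + delta_t)      # monotone decay in delta_t
    return c_long + decay * c_short           # adjusted cell state

# Placeholder weights for illustration only.
rng = np.random.default_rng(0)
W_d = rng.normal(size=(4, 4)) * 0.1
b_d = np.zeros(4)
c = rng.normal(size=4)

c_soon = tlstm_adjust_cell(c, delta_t=1.0, W_d=W_d, b_d=b_d)   # small gap
c_late = tlstm_adjust_cell(c, delta_t=1e6, W_d=W_d, b_d=b_d)   # long gap
```

With delta_t = 0 the decay equals 1 and the cell state passes through unchanged; as delta_t grows, the adjusted state converges to the long-term component alone. The standard LSTM gate computations then proceed from the adjusted cell state.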