
Title: Rank-Based Tensor Factorization for Student Performance Prediction
One of the essential problems in educational data mining is predicting students' performance on future learning materials, such as problems, assignments, and quizzes. Pioneering algorithms for predicting student performance mostly rely on two sources of information: students' past performance and the learning materials' domain knowledge model. The domain knowledge model, traditionally curated by domain experts, maps learning materials to the concepts, topics, or knowledge components presented in them. However, creating a domain model by manually labeling learning materials can be a difficult and time-consuming task. In this paper, we propose a tensor factorization model for student performance prediction that does not rely on a predefined domain model. Our proposed algorithm models student knowledge as a soft membership over latent concepts and represents the knowledge acquisition process with a rank-based constraint added to the tensor factorization objective function. Our experiments show that the proposed model outperforms state-of-the-art algorithms in predicting student performance on two real-world datasets and is robust to hyperparameter choices.
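To make the modeling idea above concrete, here is a minimal, illustrative sketch in Python/NumPy. It assumes a students × attempts × problems performance tensor factorized into a per-attempt student-knowledge factor and a problem-to-concept soft-membership matrix, with a hinge-style penalty that discourages predicted performance from decreasing across attempts standing in for the rank-based constraint; the paper's actual objective, notation, and hyper-parameters are not reproduced on this page, so every name, shape, and constant below is an assumption.

```python
# Illustrative sketch only -- not the paper's exact objective or implementation.
import numpy as np

rng = np.random.default_rng(0)
S, A, P, C = 20, 10, 15, 4              # students, attempts, problems, latent concepts
Y = rng.random((S, A, P))               # observed performance, e.g. partial credit in [0, 1]
mask = rng.random((S, A, P)) < 0.3      # which (student, attempt, problem) entries were observed

K = rng.random((S, A, C)) * 0.1         # student knowledge per attempt over latent concepts
Q = rng.random((C, P)) * 0.1            # soft membership of each problem in each latent concept

lr, lam_rank, lam_reg = 0.01, 0.1, 0.01
for _ in range(500):
    pred = K @ Q                                    # (S, A, P) predicted performance
    err = (pred - Y) * mask                         # reconstruction error on observed entries only
    drop = (pred[:, :-1, :] - pred[:, 1:, :]) > 0   # where performance is predicted to decrease
    grad_pred = err.copy()
    grad_pred[:, :-1, :] += lam_rank * drop         # push the earlier (higher) prediction down
    grad_pred[:, 1:, :] -= lam_rank * drop          # push the later (lower) prediction up
    grad_K = grad_pred @ Q.T + lam_reg * K
    grad_Q = np.einsum('sac,sap->cp', K, grad_pred) + lam_reg * Q
    K -= lr * grad_K
    Q -= lr * grad_Q
    K, Q = np.clip(K, 0, None), np.clip(Q, 0, None) # keep both factors non-negative

rmse = np.sqrt((((K @ Q - Y) * mask) ** 2).sum() / mask.sum())
print(f"masked RMSE after training: {rmse:.3f}")
```

The non-negativity clipping keeps each column of Q interpretable as a problem's soft membership over latent concepts, and K can then be read as each student's per-attempt knowledge state over those concepts.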
Authors:
Award ID(s): 1755910
Publication Date:
NSF-PAR ID: 10185066
Journal Name: 12th International Conference on Educational Data Mining (EDM)
Page Range or eLocation-ID: 288-293
Sponsoring Org: National Science Foundation
More Like this
  1. Students acquire knowledge as they interact with a variety of learning materials, such as video lectures, problems, and discussions. Modeling student knowledge at each point during their learning period and understanding the contribution of each learning material to student knowledge are essential for detecting students' knowledge gaps and recommending learning materials to them. Current student knowledge modeling techniques mostly rely on one type of learning material, mainly problems, to model student knowledge growth. These approaches ignore the fact that students also learn from other types of material. In this paper, we propose a student knowledge model that can capture knowledge growth as a result of learning from a diverse set of learning resource types while unveiling the associations among learning materials of different types. Our multi-view knowledge model (MVKM) incorporates a flexible knowledge-increase objective on top of a multi-view tensor factorization to capture occasional forgetting while representing student knowledge and learning material concepts in a lower-dimensional latent space. We evaluate our model in different experiments to show that it can accurately predict students' future performance, differentiate between knowledge gain in different student groups and concepts, and unveil hidden similarities across learning materials of different types. (An illustrative multi-view factorization sketch appears after this list.)
  2. Knowledge Tracing (KT), which aims to model students' knowledge levels and predict their performance, is one of the most important applications of user modeling. Modern KT approaches model and maintain an up-to-date state of student knowledge over a set of course concepts according to students' historical performance on the problems they attempt. However, KT approaches were designed to model knowledge by observing relatively small problem-solving steps in Intelligent Tutoring Systems. While these approaches have been applied successfully to model student knowledge from student solutions to simple problems, such as multiple-choice questions, they do not perform well for modeling complex problem solving. Most importantly, current models assume that all problem attempts are equally valuable in quantifying current student knowledge. However, for complex problems that involve many concepts at the same time, this assumption is deficient. It results in inaccurate knowledge states and unnecessary fluctuations in estimated student knowledge, especially when students guess the correct answer to a problem whose concepts they have not yet mastered, or slip on a problem whose concepts they have already mastered. In this paper, we argue that not all attempts are equally important in discovering students' knowledge state, and that some attempts can be summarized together to better represent student performance. We propose a novel student knowledge tracing approach, Granular RAnk based TEnsor factorization (GRATE), that dynamically selects student attempts that can be aggregated while predicting students' performance on problems and discovering the concepts presented in them. Our experiments on three real-world datasets demonstrate the improved performance of GRATE, compared to state-of-the-art baselines, in the task of student performance prediction. Our further analysis shows that attempt aggregation eliminates unnecessary fluctuations from students' discovered knowledge states and helps in discovering complex latent concepts in the problems. (A toy attempt-aggregation sketch appears after this list.)
  3. State-of-the-art knowledge tracing approaches mostly model student knowledge using performance on assessed learning resource types, such as quizzes, assignments, and exercises, and ignore non-assessed learning resources. However, many student activities are non-assessed, such as watching video lectures, participating in a discussion forum, and reading a section of a textbook, all of which potentially contribute to students' knowledge growth. In this paper, we propose the first deep-learning-based knowledge tracing model (DMKT) that explicitly models students' knowledge transitions over both assessed and non-assessed learning activities. With DMKT we can discover the underlying latent concepts of each non-assessed and assessed learning material and better predict student performance on future assessed learning resources. We compare our proposed method with various state-of-the-art knowledge tracing methods on four real-world datasets and show its effectiveness in predicting student performance, representing student knowledge, and discovering the underlying domain model. (A generic sketch of this mixed-activity input encoding appears after this list.)
  4. Two different implementations of PBL projects in a fluid mechanics course are presented in this paper. This required junior-level course has been taught since 2014 by the same instructor. The first PBL project presented is a complete design of pumped pipeline systems for a hypothetical plant. In the second project, engineering students partnered with pre-service teachers to design and teach an elementary school lesson on fluid mechanics concepts. The goal of this paper is to present the experiences of the authors with both PBL implementations. It explains how the projects were scaffolded through the entire semester, including how the sequence of course content was modified, how team dynamics were monitored, the faculty roles, and the end products and presentations. To evaluate and compare students' learning and satisfaction with the team experience between the two PBL implementations, a shortened version of the NCEES FE exam and the Comprehensive Assessment of Team Member Effectiveness (CATME) survey were utilized. Students completed the FE exam during the first week and then again during the last week of the semester to assess students' growth in fluid mechanics knowledge. The CATME survey was completed mid-semester to help faculty identify and address problems within team dynamics, and at the end of the semester to evaluate individual students' teamwork performance. The results showed that the type of PBL approach used in the course did not have an impact on fluid mechanics content knowledge; however, the data suggests that the cross-disciplinary PBL model led to higher levels of teamwork satisfaction. Through reflective assignments, student perceptions of the PBL implementations are discussed in the paper. Finally, some of the PBL course materials and assignments are provided.
  5. Bayesian Knowledge Tracing (BKT) is a commonly used approach for student modeling, and Long Short-Term Memory (LSTM) is a versatile model that can be applied to a wide range of tasks, such as language translation. In this work, we directly compared three models: BKT, its variant Intervention-BKT (IBKT), and LSTM, on two types of student modeling tasks: post-test score prediction and learning gain prediction. Additionally, while previous work on student learning has often used skill/knowledge components identified by domain experts, we incorporated an automatic skill discovery method (SK), which places a nonparametric prior over exercise-skill assignments, into all three models. Thus, we explored a total of six models: BKT, BKT+SK, IBKT, IBKT+SK, LSTM, and LSTM+SK. Two training datasets were employed: one was collected from a natural-language physics intelligent tutoring system named Cordillera, and the other from a standard probability intelligent tutoring system named Pyrenees. Overall, our results showed that BKT and BKT+SK outperformed the others on predicting post-test scores, whereas LSTM and LSTM+SK achieved the highest accuracy, F1-measure, and area under the ROC curve (AUC) on predicting learning gains. Furthermore, we demonstrated that by combining SK with the BKT model, BKT+SK could reliably predict post-test scores using only the earliest 50% of the entire training sequences. For early prediction of learning gains, LSTM using only the earliest 70% of the sequences delivered predictions comparable to those obtained using the entire training sequences. These findings point toward a learning environment that can predict students' performance and learning gains early and adapt its pedagogical strategies accordingly. (A minimal BKT update sketch appears after this list.)
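For item 1 (MVKM), the following is a minimal structural sketch in Python/NumPy of the kind of multi-view objective the abstract describes: two views of student activity (here, graded quizzes and watched video lectures) are reconstructed from the same per-attempt student-knowledge tensor, and a soft hinge term penalizes decreases in knowledge between attempts while still allowing occasional forgetting. All names, shapes, and weights are assumptions; the paper's actual objective is not reproduced on this page.

```python
# Structural sketch of a multi-view factorization objective (assumed, illustrative).
import numpy as np

rng = np.random.default_rng(1)
S, A, C, Pq, Pv = 20, 10, 4, 12, 8               # students, attempts, concepts, quizzes, videos
K = rng.random((S, A, C))                        # shared student knowledge per attempt
Q_quiz = rng.random((C, Pq))                     # quiz-to-concept soft memberships
Q_video = rng.random((C, Pv))                    # video-to-concept soft memberships
Y_quiz = rng.random((S, A, Pq))                  # observed quiz scores
Y_video = (rng.random((S, A, Pv)) < 0.5) * 1.0   # observed video interactions (binary)

def mvkm_style_loss(K, Q_quiz, Q_video, Y_quiz, Y_video, w_video=0.5, lam_inc=0.3):
    loss_quiz = np.sum((K @ Q_quiz - Y_quiz) ** 2)      # view 1: graded material
    loss_video = np.sum((K @ Q_video - Y_video) ** 2)   # view 2: non-graded material
    # soft knowledge-increase term: penalize drops in knowledge between attempts
    drops = np.maximum(K[:, :-1, :] - K[:, 1:, :], 0.0)
    return loss_quiz + w_video * loss_video + lam_inc * np.sum(drops)

print(f"multi-view objective value: {mvkm_style_loss(K, Q_quiz, Q_video, Y_quiz, Y_video):.2f}")
```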
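For item 2 (GRATE), the model learns which attempts to aggregate; as a toy stand-in for that idea, the sketch below merges consecutive attempts by the same student on the same problem by averaging their outcomes, so guess-and-slip noise on a single problem collapses into one smoother observation. The rule and the field names are illustrative assumptions, not the paper's mechanism.

```python
# Toy attempt-aggregation heuristic (assumed), not GRATE's learned selection.
from dataclasses import dataclass

@dataclass
class Attempt:
    student: str
    problem: str
    correct: float       # 1.0 correct, 0.0 incorrect (or partial credit)

def aggregate_consecutive(attempts):
    """Collapse runs of consecutive attempts by the same student on the same problem."""
    merged = []
    run = [attempts[0]]
    for a in attempts[1:]:
        if a.student == run[-1].student and a.problem == run[-1].problem:
            run.append(a)
        else:
            merged.append(Attempt(run[0].student, run[0].problem,
                                  sum(x.correct for x in run) / len(run)))
            run = [a]
    merged.append(Attempt(run[0].student, run[0].problem,
                          sum(x.correct for x in run) / len(run)))
    return merged

log = [Attempt("s1", "p1", 0.0), Attempt("s1", "p1", 0.0), Attempt("s1", "p1", 1.0),
       Attempt("s1", "p2", 1.0)]
print(aggregate_consecutive(log))   # two merged rows: ("s1", "p1", ~0.33) and ("s1", "p2", 1.0)
```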
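For item 3 (DMKT), the architecture itself is not specified on this page; the sketch below is a generic PyTorch encoding of the input the abstract implies: a single chronological stream of assessed and non-assessed activities feeding an LSTM that predicts correctness after each step. The class, dimensions, and feature layout are assumptions for illustration.

```python
# Generic mixed-activity knowledge tracing sketch (assumed), not the DMKT architecture.
import torch
import torch.nn as nn

class TinyMixedKT(nn.Module):
    """LSTM over one chronological stream of assessed and non-assessed activities."""
    def __init__(self, n_materials, emb_dim=32, hidden=64):
        super().__init__()
        self.material_emb = nn.Embedding(n_materials, emb_dim)   # one table for all material types
        self.lstm = nn.LSTM(emb_dim + 2, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, material_ids, assessed_flag, correctness):
        # material_ids: (batch, T) long; assessed_flag, correctness: (batch, T) float
        # correctness is a neutral placeholder (0) for non-assessed activities
        x = self.material_emb(material_ids)
        x = torch.cat([x, assessed_flag.unsqueeze(-1), correctness.unsqueeze(-1)], dim=-1)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h)).squeeze(-1)   # P(correct) estimate after each step

model = TinyMixedKT(n_materials=50)
ids = torch.randint(0, 50, (2, 6))                      # 2 students, 6 activities each
flags = torch.randint(0, 2, (2, 6)).float()             # 1 = assessed, 0 = non-assessed
correct = torch.randint(0, 2, (2, 6)).float() * flags   # correctness only for assessed steps
print(model(ids, flags, correct).shape)                 # torch.Size([2, 6])
```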
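For item 5, the sketch below shows the classic Bayesian Knowledge Tracing posterior update that BKT and its variants build on; the parameter values are arbitrary examples, not the ones fitted in the study.

```python
# Classic BKT update for a single skill; parameter values are arbitrary examples.
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Posterior P(known) after one observed response, then apply the learning transition."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    return posterior + (1 - posterior) * p_learn

p = 0.3                                  # P(L0): prior probability the skill is known
for obs in [1, 1, 0, 1]:                 # a short sequence of graded responses
    p = bkt_update(p, obs)
    print(f"P(known) = {p:.3f}")
```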