


Creators/Authors contains: "Brusilovsky, Peter"


  1. We present the results of a study in which we provided students with textual explanations for learning content recommendations, along with adaptive navigational support, in the context of a personalized system for practicing Java programming. We evaluated how varying the modality of access (no access vs. on-mouseover vs. on-click) influences how students interact with the learning platform and work with both recommended and non-recommended content. We found that students' persistence when solving recommended coding problems is correlated with their learning gain, and that specific student-engagement metrics can be supported through adequate navigational support and access to explanations of the recommendations.
    Free, publicly-accessible full text available September 4, 2024
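The study's central finding, a correlation between persistence and learning gain, can be illustrated with a minimal sketch. The data below are invented for illustration and are not from the study; the persistence measure (attempts on recommended problems) is a hypothetical operationalization.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# invented per-student data: attempts on recommended problems vs. learning gain
persistence = [3, 7, 2, 9, 5]
learning_gain = [0.10, 0.50, 0.05, 0.60, 0.30]
r = pearson(persistence, learning_gain)
```

With these invented numbers the correlation is strongly positive, mirroring the direction of the reported finding.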
  2. Predicting student performance in introductory programming courses can help identify struggling students and improve their persistence. At the same time, the prediction must be transparent for instructors and students to use its results effectively. Explainable machine learning models can help students and instructors gain insights into the programming behaviors and problem-solving strategies that lead to good or poor performance. This study develops an explainable model that predicts students' performance based on programming assignment submission information. We extract data-driven features from students' programming submissions and employ a stacked ensemble model to predict students' final exam grades. We use SHAP, a game-theory-based framework, to explain the model's predictions and help stakeholders understand the impact of different programming behaviors on students' success. Moreover, we analyze the impact of important features and use a combination of descriptive statistics and mixture models to identify profiles of students based on their problem-solving patterns, further bolstering explainability. The experimental results suggest that our model significantly outperforms other machine learning models, including KNN, SVM, XGBoost, Bagging, Boosting, and linear regression. Our explainable and transparent model can help explain students' common problem-solving patterns in relation to their level of expertise, enabling effective intervention and adaptive support for students.
    Free, publicly-accessible full text available July 11, 2024
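The kind of per-feature attribution SHAP provides can be sketched for the special case of a linear model, where exact Shapley values reduce to w_i * (x_i - E[x_i]). The feature names, weights, and data below are illustrative assumptions, not taken from the paper, and the paper's actual model is a stacked ensemble rather than a single linear model.

```python
from statistics import mean

# toy per-student features: (attempts_per_problem, fraction_compiling);
# names and weights are invented for illustration
background = [(2.0, 0.5), (5.0, 0.9), (8.0, 0.4), (3.0, 0.7)]
w = (-2.0, 30.0)   # linear weights for the two features
bias = 60.0

def predict(x):
    """Toy linear grade predictor."""
    return bias + w[0] * x[0] + w[1] * x[1]

# feature means over the background set
means = tuple(mean(col) for col in zip(*background))

def shap_values(x):
    """Exact Shapley values for a linear model: w_i * (x_i - E[x_i])."""
    return tuple(w[i] * (x[i] - means[i]) for i in range(2))

x = (6.0, 0.8)
phi = shap_values(x)
base_value = predict(means)  # model output at the feature means
# local accuracy: base value + attributions reconstruct the prediction
```

The "local accuracy" property shown in the last comment is what makes such attributions interpretable: each feature's contribution is measured relative to the average student.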
  3. The ability to automatically assess learners' activities is the key to user modeling and personalization in adaptive educational systems. The work presented in this paper opens an opportunity to expand the scope of automated assessment from traditional programming problems to code comprehension tasks, in which students are asked to explain the critical steps of a program. The ability to automatically assess these self-explanations offers a unique opportunity to understand the current state of student knowledge, recognize possible misconceptions, and provide feedback. Annotated datasets are needed to train Artificial Intelligence/Machine Learning approaches for the automated assessment of student explanations. To answer this need, we present a novel corpus called SelfCode, which consists of 1,770 sentence pairs of student and expert self-explanations of Java code examples, along with semantic similarity judgments provided by experts. We also present a baseline automated assessment model that relies on textual features. The corpus is available at the GitHub repository (https://github.com/jeevanchaps/SelfCode).
    Free, publicly-accessible full text available May 8, 2024
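A baseline that relies on textual features could be as simple as comparing a student's explanation to an expert's with bag-of-words cosine similarity. This sketch is a generic illustration of the idea, assuming invented example sentences; the paper's actual baseline uses richer textual features.

```python
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two sentences."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# invented student/expert explanation pair
student = "the loop adds each element of the array to sum"
expert = "the for loop accumulates every array element into the sum variable"
score = cosine_sim(student, expert)
```

Such a score can then be regressed against the expert similarity judgments in the corpus to evaluate how well surface-level overlap predicts semantic similarity.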
  4. Self-efficacy, or the belief in one's ability to accomplish a task or achieve a goal, can significantly influence the effectiveness of various instructional methods in inducing learning gains. The importance of self-efficacy is particularly pronounced in complex subjects like Computer Science, where students with high self-efficacy are more likely to feel confident in their ability to learn and succeed. Conversely, those with low self-efficacy may become discouraged and consider abandoning the field. The work presented here examines the relationship between self-efficacy and students' learning of computer programming concepts. For this purpose, we conducted a randomized controlled experiment with university-level students who were randomly assigned to two groups: a control group, where participants read Java programs accompanied by explanatory texts (a passive strategy), and an experimental group, where participants self-explained while interacting through dialogue with an intelligent tutoring system (an interactive strategy). We report here the findings of this experiment with a focus on self-efficacy and its relation to student learning gains (measured via pre- and post-tests), as well as other important factors such as prior knowledge, experimental condition/instructional strategy, and interaction effects.
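Learning gains from pre- and post-tests, as in the experiment above, are commonly summarized as a normalized gain so that students starting from different pre-test levels can be compared. A minimal sketch with invented scores (the formula is the standard normalized-gain measure, not necessarily the exact metric used in the paper):

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalized learning gain: fraction of the possible improvement achieved."""
    if max_score == pre:
        return 0.0  # no room to improve
    return (post - pre) / (max_score - pre)

# invented pre/post scores for two students per condition
control = [normalized_gain(40, 55), normalized_gain(50, 60)]
experimental = [normalized_gain(45, 75), normalized_gain(55, 80)]
```

Comparing the two groups' mean normalized gains (e.g., with a t-test) is the usual way to evaluate the interactive versus passive strategies.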
  5. Hilliger, Isabel ; Muñoz-Merino, Pedro J. ; De Laet, Tinne ; Ortega-Arranz, Alejandro ; Farrell, Tracie (Ed.)
    Studies of technology-enhanced learning (TEL) environments have indicated that learner behavior can be affected (positively or negatively) by presenting information about peer groups, such as peers' in-system performance or course grades. Researchers have explained these findings through social comparison theory or competition, or by categorizing them as an impact of gamification features. Although the choice of individual peers has been explored considerably in recent TEL research, the effect of learner control on peer-group selection has received little attention. This paper attempts to extend prior work on learner-controlled social comparison by studying a novel fine-grained peer-group selection interface in a TEL environment for learning Python programming. To achieve this goal, we analyzed system usage logs and questionnaire responses collected from multiple rounds of classroom studies. By observing student actions in selecting and refining their peer comparison cohort, we gain a better understanding of whom students perceive as their peers and how this perception changes during the course. We also explored the connection between their peer-group choices and their engagement with learning content. Finally, we attempted to associate student choices in peer selection with several dimensions of individual differences.
  6. Crossley, Scott ; Popescu, Elvira (Ed.)
    We present here a novel instructional resource, called DeepCode, to support deep code comprehension and learning in intro-to-programming courses (CS1 and CS2). DeepCode is a set of instructional code examples which we call a codeset and which was annotated by our team with comments (e.g., explaining the logical steps of the underlying problem being solved) and related instructional questions that can play the role of hints meant to help learners think about and articulate explanations of the code. While DeepCode was designed primarily to serve our larger efforts of developing an intelligent tutoring system (ITS) that fosters the monitoring, assessment, and development of code comprehension skills for students learning to program, the codeset can be used for other purposes such as assessment, problem-solving, and in various other learning activities such as studying worked-out code examples with explanations and code visualizations. We present here the underlying principles, theories, and frameworks behind our design process, the annotation guidelines, and summarize the resulting codeset of 98 annotated Java code examples which include 7,157 lines of code (including comments), 260 logical steps, 260 logical step details, 408 statement level comments, and 590 scaffolding questions. 
  7. Educational data mining research has demonstrated that the large volume of learning data collected by modern e-learning systems can be used to recognize student behavior patterns and group students into cohorts with similar behavior. However, few attempts have been made to connect and compare behavioral patterns with known dimensions of individual differences. To what extent is learner behavior defined by known individual differences? Which of them could be a better predictor of learner engagement and performance? Could we use behavior patterns to build a data-driven model of individual differences that is more useful for predicting critical outcomes of the learning process than traditional models? Our paper attempts to answer these questions using a large volume of learner data collected in an online practice system. We apply a sequential pattern mining approach to build individual models of learner practice behavior and reveal latent student subgroups that exhibit considerably different practice behavior. Using these models, we explored the connections between learner behavior and both the incoming and outgoing parameters of the learning process. Among the incoming parameters, we examined traditionally collected individual differences such as self-esteem, gender, and knowledge-monitoring skills. We also attempted to bridge the gap between cluster-based behavior pattern models and traditional scale-based models of individual differences by quantifying learner behavior on a latent data-driven scale. Our research shows that this data-driven model of individual differences performs significantly better than traditional models of individual differences in predicting important parameters of the learning process, such as performance and engagement.
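The first step of a sequential pattern mining approach like the one described above is to find action subsequences that recur across learner sessions. This sketch mines only contiguous bigrams from invented practice logs; the action names and support threshold are illustrative assumptions, and real sequential pattern miners (e.g., PrefixSpan-style algorithms) handle gapped subsequences of any length.

```python
from collections import Counter

# invented practice-session logs: each session is a sequence of actions
sessions = [
    ["read_example", "attempt", "attempt", "success"],
    ["attempt", "read_example", "attempt", "success"],
    ["read_example", "attempt", "success"],
]

def frequent_bigrams(seqs, min_support=2):
    """Return action bigrams occurring in at least min_support sessions."""
    counts = Counter()
    for s in seqs:
        for pair in set(zip(s, s[1:])):  # count each pattern once per session
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

patterns = frequent_bigrams(sessions)
```

Each learner can then be represented by which frequent patterns their sessions contain, giving the per-student behavior vectors that clustering operates on.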
  8. The paper focuses on a new type of interactive learning content for SQL programming: worked examples of SQL code. While worked examples are popular in learning programming, their application to learning SQL is limited. Using a novel tool for presenting interactive worked examples, Database Query Analyzer (DBQA), we performed a large-scale randomized controlled study assessing the value of worked examples as a new type of practice content in a database course. We report the results of the classroom study examining the usage and impact of DBQA. Among other aspects, we explored the effect of the textual step explanations provided by DBQA.
  9. The paper focuses on a new type of interactive learning content for SQL programming: worked examples of SQL code. While worked examples are popular in learning programming, their application to learning SQL is limited. Using a novel tool for presenting interactive worked examples, Database Query Analyzer (DBQA), we performed a large-scale randomized controlled study assessing the value of worked examples as a new type of practice content in a database course.
  10. Personalized learning and educational recommender systems are integral parts of modern online education systems. In this context, the problem of recommending the best learning material to students is a perfect example of sequential multi-objective recommendation. Learning material recommenders need to optimize for and balance multiple goals, such as adapting to student ability, adjusting the learning material difficulty, increasing student knowledge, and serving student interest, at every step of the student learning sequence. However, the obscurity and incompatibility of these objectives pose additional challenges for learning material recommenders. To address these challenges, we propose Proximity-based Educational Recommendation (PEAR), a recommendation framework that suggests a ranked list of problems by approximating and balancing problem difficulty and student ability. To achieve an accurate approximation of these objectives, PEAR can integrate with any state-of-the-art student and domain knowledge model. As an example of such a model, we introduce the Deep Q-matrix based Knowledge Tracing model (DQKT) and integrate PEAR with it. Rather than making static recommendations, this framework dynamically suggests new problems at each step by tracking the student's knowledge level over time. We use an offline evaluation framework, Robust Evaluation Matrix (REM), to compare PEAR with various baseline recommendation policies under three different student simulators and demonstrate the effectiveness of our proposed model. We experiment with different student trajectory lengths and show that while PEAR can outperform the baseline policies with less data, it is also robust with longer sequence lengths.
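The core idea of proximity-based ranking can be sketched in a few lines: order candidate problems by how closely their estimated difficulty matches the student's current ability estimate, which PEAR would obtain from a knowledge-tracing model such as DQKT. The difficulty values and ability estimate below are invented, and the real framework balances additional objectives beyond this single distance.

```python
def rank_by_proximity(difficulty: dict, ability: float) -> list:
    """Rank problem ids so the closest difficulty-to-ability match comes first."""
    return sorted(difficulty, key=lambda p: abs(difficulty[p] - ability))

# invented per-problem difficulty estimates on a 0..1 scale
difficulty = {"p1": 0.2, "p2": 0.55, "p3": 0.9, "p4": 0.5}
ranked = rank_by_proximity(difficulty, ability=0.52)
```

Re-running the ranking after each attempt, with an ability estimate updated by the knowledge-tracing model, yields the dynamic, step-by-step recommendations the abstract describes.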