Title: Explainable Recommendations in a Personalized Programming Practice System
This paper contributes to the research on explainable educational recommendations by investigating explainable recommendations in the context of a personalized practice system for introductory Java programming. We present the design of two types of explanations to justify the recommendation of the next learning activity to practice. The value of these explainable recommendations was assessed in a semester-long classroom study. The paper analyzes the observed impact of explainable recommendations on various aspects of student behavior and performance.
Award ID(s): 1740775, 1822752
NSF-PAR ID: 10300395
Author(s) / Creator(s):
Date Published:
Journal Name: Lecture Notes in Computer Science
Volume: 12748
ISSN: 1611-3349
Page Range / eLocation ID: 64-76
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Personalized recommendation of learning content is one of the most frequently cited benefits of personalized online learning. The expectation is that, with personalized content recommendation, students will be able to build their own unique learning paths and achieve course goals in the most efficient way. However, in many practical cases students search for learning content not to expand their knowledge, but to address problems encountered in the learning process, such as failures to solve a problem. In these cases, students could be better assisted by remedial recommendations focused on content that could help in resolving current problems. This paper presents a transparent and explainable interface for remedial recommendations in an online programming practice system. The interface was implemented to support SQL programming practice and evaluated in the context of a large database course. The paper summarizes the insights obtained from the study and discusses future work on remedial recommendations.
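The abstract above does not detail how remedial content is selected, so the following is only a rough sketch of the general idea: rank practice problems by how well they cover the concepts implicated in a student's recent failed attempts. The function name, concept labels, and scoring rule are invented for illustration and are not taken from the paper.

```python
# Hypothetical remedial recommender: rank practice problems by how well they
# cover the concepts behind a student's recent failures. All names and the
# scoring rule are illustrative assumptions, not the paper's method.

def recommend_remedial(failed_concepts, problems, top_k=3):
    """failed_concepts: dict concept -> number of recent failures
    problems: dict problem id -> set of concepts the problem practices"""
    def score(concepts):
        # Weight each covered concept by how often the student failed on it.
        return sum(failed_concepts.get(c, 0) for c in concepts)

    ranked = sorted(problems.items(), key=lambda kv: score(kv[1]), reverse=True)
    return [pid for pid, concepts in ranked[:top_k] if score(concepts) > 0]

failed = {"GROUP BY": 3, "HAVING": 2}   # concepts behind recent failed attempts
bank = {
    "q1": {"SELECT", "WHERE"},
    "q2": {"GROUP BY", "HAVING"},
    "q3": {"GROUP BY", "JOIN"},
}
print(recommend_remedial(failed, bank))  # ['q2', 'q3']
```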
  2. This paper reports our recent practice of recommending articles to cold-start users at Tencent. Transferring knowledge from information-rich domains to help user modeling is an effective way to address the user-side cold-start problem. Our previous work demonstrated that general-purpose user embeddings based on mobile app usage helped article recommendations. However, high-dimensional embeddings are cumbersome for online usage, thus limiting adoption. On the other hand, user clustering, which partitions users into several groups, can provide a lightweight, online-friendly, and explainable way to help recommendations. Effective user clustering for article recommendations based on mobile app usage faces unique challenges, including (1) the gap between an active user's behavior of mobile app usage and article reading, and (2) the gap between mobile app usage patterns of active and cold-start users. To address the challenges, we propose a tailored Dual Alignment User Clustering (DAUC) model, which applies a sample-wise contrastive alignment to eliminate the gap between active users' mobile app usage and article reading behavior, and a distribution-wise adversarial alignment to eliminate the gap between active users' and cold-start users' app usage behavior. With DAUC, cold-start recommendation-optimized user clustering based on mobile app usage can be achieved. On top of the user clusters, we further build candidate generation strategies, real-time features, and corresponding ranking models without much engineering difficulty. Both online and offline experiments demonstrate the effectiveness of our work.
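As a hedged illustration of the two alignment ideas named above, the PyTorch sketch below pairs a sample-wise contrastive (InfoNCE-style) loss with a distribution-wise adversarial loss using gradient reversal. The encoders, discriminator, dimensions, and exact loss forms are assumptions for illustration; the paper's actual DAUC architecture may differ.

```python
# Sketch of DAUC's two alignment losses, assuming two encoders that produce
# app-usage and article-reading embeddings, plus a small domain discriminator.
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips gradients so the encoder is
    trained adversarially against the domain discriminator."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def contrastive_alignment(app_emb, read_emb, temperature=0.1):
    """Sample-wise alignment: an active user's app-usage embedding should be
    closest to that same user's article-reading embedding (InfoNCE)."""
    app = F.normalize(app_emb, dim=1)
    read = F.normalize(read_emb, dim=1)
    logits = app @ read.t() / temperature                 # (B, B) similarities
    targets = torch.arange(app.size(0), device=logits.device)  # diagonal positives
    return F.cross_entropy(logits, targets)

def adversarial_alignment(active_emb, cold_emb, discriminator):
    """Distribution-wise alignment: the discriminator tries to tell active
    from cold-start embeddings; the reversed gradient pushes the encoder to
    make the two distributions indistinguishable."""
    emb = torch.cat([active_emb, cold_emb], dim=0)
    labels = torch.cat([torch.ones(active_emb.size(0)),
                        torch.zeros(cold_emb.size(0))]).to(emb.device)
    logits = discriminator(GradReverse.apply(emb)).squeeze(1)
    return F.binary_cross_entropy_with_logits(logits, labels)
```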
  3. Recent work in recommender systems has emphasized the importance of fairness, with a particular interest in bias and transparency, in addition to predictive accuracy. In this paper, we focus on the state-of-the-art pairwise ranking model, Bayesian Personalized Ranking (BPR), which has previously been found to outperform pointwise models in predictive accuracy while also being able to handle implicit feedback. Specifically, we address two limitations of BPR: (1) BPR is a black-box model that does not explain its outputs, thus limiting the user's trust in the recommendations and the analyst's ability to scrutinize a model's outputs; and (2) BPR is vulnerable to exposure bias due to the data being Missing Not At Random (MNAR). This exposure bias usually translates into unfairness against the least popular items, because they risk being under-exposed by the recommender system. In this work, we first propose a novel explainable loss function and a corresponding Matrix Factorization-based model called Explainable Bayesian Personalized Ranking (EBPR) that generates recommendations along with item-based explanations. Then, we theoretically quantify the additional exposure bias resulting from the explainability, and use it as a basis to propose an unbiased estimator for the ideal EBPR loss. The result is a ranking model that aptly captures both debiased and explainable user preferences. Finally, we perform an empirical study on three real-world datasets that demonstrates the advantages of our proposed models.
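For readers unfamiliar with the loss being extended, the sketch below shows standard BPR in PyTorch next to a hedged reading of an explainability-weighted variant, in which each positive/negative pair is weighted so that explainable positives paired with unexplainable negatives dominate training. How the explainability scores are computed, and the paper's debiased estimator, are not reproduced here.

```python
# Minimal pairwise-loss sketch. bpr_loss is the standard formulation; the
# weighted variant is an assumption based on the abstract, not the paper's
# exact EBPR objective.
import torch
import torch.nn.functional as F

def bpr_loss(pos_scores, neg_scores):
    """Bayesian Personalized Ranking: push each observed (positive) item's
    score above a sampled unobserved (negative) item's score."""
    return -F.logsigmoid(pos_scores - neg_scores).mean()

def ebpr_loss(pos_scores, neg_scores, expl_pos, expl_neg):
    """Explainability-weighted BPR (sketch): emphasize pairs where the
    positive item is explainable for the user and the negative is not.
    expl_pos / expl_neg are assumed explainability scores in [0, 1]."""
    pairwise = -F.logsigmoid(pos_scores - neg_scores)
    return (expl_pos * (1.0 - expl_neg) * pairwise).mean()
```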
  4. A variety of systems have been proposed to assist users in detecting machine learning (ML) fairness issues. These systems approach bias reduction from a number of perspectives, including recommender systems, exploratory tools, and dashboards. In this paper, we seek to inform the design of these systems by examining how individuals make sense of fairness issues as they use different de-biasing affordances. In particular, we consider the tension between de-biasing recommendations, which are quick but may lack nuance, and "what-if"-style exploration, which is time consuming but may lead to deeper understanding and transferable insights. Using logs, think-aloud data, and semi-structured interviews, we find that exploratory systems promote a rich pattern of hypothesis generation and testing, while recommendations deliver quick answers that satisfy participants at the cost of reduced information exposure. We highlight design requirements and trade-offs in the design of ML fairness systems to promote accurate and explainable assessments.
  5. Website fingerprinting (WF) attacks allow an adversary to associate a website with the encrypted traffic patterns produced when accessing it, thus threatening to destroy the client-server unlinkability promised by anonymous communication networks. Explainable WF is an open problem in which we need to improve our understanding of (1) the machine learning models used to conduct WF attacks; and (2) the WF datasets used as inputs to those models. This paper focuses on explainable datasets; that is, we develop an alternative to the standard practice of gathering low-quality WF datasets using synthetic browsers in large networks without controlling for natural network variability. In particular, we demonstrate how network simulation can be used to produce explainable WF datasets by leveraging the simulator's high degree of control over network operation. Through a detailed investigation of the effect of network variability on WF performance, we find that: (1) training and testing WF attacks in networks with distinct levels of congestion increases the false-positive rate by as much as 200%; (2) augmenting the WF attacks by training them across several networks with varying degrees of congestion decreases the false-positive rate by as much as 83%; and (3) WF classifiers trained on completely simulated data can achieve greater than 80% accuracy when applied to the real world.
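To make the evaluation protocol behind finding (1) concrete, the sketch below trains a classifier on traces from one simulated congestion level and tests it on another, reporting the false-positive rate. The random-forest model, feature shapes, and stand-in random data are assumptions so the example runs; real WF features would be extracted from packet traces.

```python
# Cross-congestion evaluation sketch: train at one network condition, test at
# another, and measure the false-positive rate on unmonitored traffic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def false_positive_rate(y_true, y_pred, negative_label=0):
    """Fraction of unmonitored (negative) traces classified as some site."""
    neg = (y_true == negative_label)
    return float(np.mean(y_pred[neg] != negative_label)) if neg.any() else 0.0

# Stand-in random data so the sketch runs; label 0 marks unmonitored traffic.
rng = np.random.default_rng(0)
X_low, y_low = rng.normal(size=(1000, 50)), rng.integers(0, 5, 1000)
X_high, y_high = rng.normal(size=(1000, 50)), rng.integers(0, 5, 1000)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_low, y_low)                                    # train: low congestion
fpr = false_positive_rate(y_high, clf.predict(X_high))   # test: high congestion
print(f"cross-congestion false-positive rate: {fpr:.2%}")
```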