Title: Assessing Post-hoc Explainability of the BKT Algorithm
As machine intelligence is increasingly incorporated into educational technologies, it becomes imperative for instructors and students to understand the potential flaws of the algorithms on which their systems rely. This paper describes the design and implementation of an interactive post-hoc explanation of the Bayesian Knowledge Tracing algorithm, which is implemented in learning analytics systems used across the United States. After a user-centered design process to smooth out interaction design difficulties, we ran a controlled experiment to evaluate whether the interactive or the static version of the explainable led to greater learning. Our results reveal that how much users learn about an algorithm through an explainable depends on their educational background. Designers of post-hoc explainables in other contexts must likewise consider their users' educational background to determine how best to empower more informed decision-making with AI-enhanced systems.
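The paper itself does not include source code. As a rough illustration of the algorithm the explainable covers, here is a minimal sketch of the standard BKT update step; the parameter names and default values below are illustrative, not taken from the paper or from any specific learning analytics system.

```python
# Minimal sketch of the standard Bayesian Knowledge Tracing (BKT) update.
# Parameter names and default values are illustrative, not from the paper.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """Update the probability that a student has mastered a skill
    after observing one response (correct=True or False)."""
    if correct:
        # Posterior given a correct answer: mastered and did not slip,
        # versus not mastered but guessed correctly.
        evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
        posterior = p_know * (1 - p_slip) / evidence
    else:
        # Posterior given an incorrect answer: mastered but slipped,
        # versus not mastered and failed to guess.
        evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
        posterior = p_know * p_slip / evidence
    # Learning (transition) step: a chance of acquiring the skill this step.
    return posterior + (1 - posterior) * p_transit

# Example: trace the mastery estimate over a short response sequence.
p_mastery = 0.3  # illustrative prior
for observed_correct in [True, False, True, True]:
    p_mastery = bkt_update(p_mastery, observed_correct)
    print(round(p_mastery, 3))
```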
Award ID(s): 1849984
PAR ID: 10145247
Journal Name: ACM/AAAI Conference on Artificial Intelligence, Ethics, and Society
Page Range / eLocation ID: 407 to 413
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Traffic accident anticipation is a vital function of Automated Driving Systems (ADS) in providing a safety-guaranteed driving experience. An accident anticipation model aims to predict accidents promptly and accurately before they occur. Existing Artificial Intelligence (AI) models of accident anticipation lack a human-interpretable explanation of their decision making. Although these models perform well, they remain a black box to ADS users, who find it difficult to trust them. To this end, this paper presents a gated recurrent unit (GRU) network that learns spatio-temporal relational features for the early anticipation of traffic accidents from dashcam video data. A post-hoc attention mechanism named Grad-CAM (Gradient-weighted Class Activation Mapping) is integrated into the network to generate saliency maps as the visual explanation of the accident anticipation decision. An eye tracker captures human eye fixation points for generating human attention maps. The explainability of network-generated saliency maps is evaluated in comparison to human attention maps. Qualitative and quantitative results on a public crash data set confirm that the proposed explainable network can anticipate an accident on average 4.57 s before it occurs, with 94.02% average precision. Various post-hoc attention-based XAI methods are then evaluated and compared. This confirms that Grad-CAM, chosen by this study, can generate high-quality, human-interpretable saliency maps (with a Normalized Scanpath Saliency of 1.23) for explaining the crash anticipation decision. Importantly, results confirm that the proposed AI model, with a human-inspired design, can outperform humans in accident anticipation.
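As a rough, framework-agnostic sketch of the two quantities described above, Grad-CAM saliency and Normalized Scanpath Saliency (NSS), assuming the convolutional feature maps, their class-score gradients, and the human fixation mask have already been extracted; the array shapes and names are illustrative, not the authors' implementation:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Sketch of Grad-CAM: activations are a conv layer's feature maps
    of shape (C, H, W); gradients are d(class score)/d(activations)
    with the same shape. Returns an (H, W) saliency map in [0, 1]."""
    # Channel weights: global-average-pool the gradients over space.
    weights = gradients.mean(axis=(1, 2))                         # (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                                     # normalize
    return cam

def normalized_scanpath_saliency(saliency, fixation_mask):
    """NSS: mean z-scored saliency value at human fixation locations."""
    z = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    return float(z[fixation_mask.astype(bool)].mean())
```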
  2. Machine learning is routinely used to automate consequential decisions about users in domains such as finance and healthcare, raising concerns of transparency and recourse for negative outcomes. Existing Explainable AI techniques generate a static counterfactual point explanation which recommends changes to a user's instance to obtain a positive outcome. Unfortunately, these recommendations are often difficult or impossible for users to realistically enact. To overcome this, we present FACET, the first interactive robust explanation system which generates personalized counterfactual region explanations. FACET's expressive explanation analytics empower users to explore and compare multiple counterfactual options and develop a personalized actionable plan for obtaining their desired outcome. Visitors to the demonstration will interact with FACET via a new web dashboard for explanations of a loan approval scenario. In doing so, visitors will experience how lay users can easily leverage powerful explanation analytics through visual interactions and displays without the need for a strong technical background. 
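For context, a conventional counterfactual point explanation of the kind FACET moves beyond can be sketched as a search for the nearest positively-classified candidate. This toy example is not FACET's region-based method, and all names and parameters are illustrative:

```python
import numpy as np

def nearest_counterfactual(x, candidates, predict, desired=1):
    """Toy counterfactual point explanation (not FACET's region approach):
    return the candidate instance closest to x that the model assigns
    the desired outcome, or None if no candidate qualifies.

    x:          the user's instance, shape (n_features,)
    candidates: pool of plausible alternative instances, shape (n, n_features)
    predict:    callable mapping a 2-D array of instances to class labels
    """
    labels = np.asarray(predict(candidates))
    positive = candidates[labels == desired]
    if len(positive) == 0:
        return None
    distances = np.linalg.norm(positive - x, axis=1)
    return positive[np.argmin(distances)]
```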
  3. Much of lifelong learning is driven by our curiosity to ask ourselves questions about the things around us in everyday life. Unfortunately, we often fail to pursue these questions to acquire new knowledge, resulting in missed opportunities for lifelong learning. We investigated two approaches to technological support for lifelong learning from question-asking in everyday life: an in-situ approach – reflecting and learning in the situatedness of the moment when a question is asked, and a post hoc approach – self-reflecting and learning after the question-asking moment when one is available to reflect. The in-situ approach may enable people to tap into their embodied experience to gain understanding, while a post hoc approach may allow people to allocate greater cognitive and material resources to explore and understand. We implemented two systems embodying each of the two approaches. A study was conducted to compare the use of the two learning support systems in an everyday virtual environment. Results showed that the post hoc approach produces more curiosity questions and reflection than the in-situ approach. We discuss the implications of our results for the design of systems to support lifelong learning. 
  4. This paper considers the use of a post-metadata-based approach to identifying intentionally deceptive online content. It presents the use of an inherently explainable artificial intelligence technique, which utilizes machine learning to train an expert system, for this purpose. It considers the role of three factors (textual context, speaker background, and emotion) in fake news detection and evaluates the efficacy of using these key factors, rather than the inherently subjective processing of post text itself, to identify deceptive online content. The paper presents initial work on a potential deceptive content detection tool and, through the networks it presents for this purpose, examines the interrelationships and comparative importance of the factors that can be used to determine whether a post is deceptive.
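The paper's expert-system pipeline is not reproduced here; as a loose stand-in for an inherently interpretable model trained on post metadata, the sketch below fits a shallow decision tree on invented metadata-derived scores and prints its human-readable rules. The feature names, data, and labels are all placeholders.

```python
# Placeholder sketch: a shallow decision tree stands in for a generic
# inherently interpretable model over post metadata; it is NOT the
# paper's expert system, and all features/data below are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["speaker_credibility", "emotion_intensity", "context_consistency"]
rng = np.random.default_rng(0)
X = rng.random((200, 3))                        # invented metadata-derived scores
y = (X[:, 0] < 0.4) & (X[:, 1] > 0.6)           # invented "deceptive" labels

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))  # readable if/then rules
```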
  5. As machine learning classifier models become more widely adopted, opaque “black-box” models remain largely inscrutable for a variety of reasons. Since their applications increasingly involve decisions affecting people's lives, there is growing demand that their predictions be understandable to humans. Of particular interest in eXplainable AI (XAI) is the interpretability of explanations, i.e., that a model’s prediction should be understandable in terms of the input features. One popular approach is LIME, which offers a model-agnostic framework for explaining any classifier. However, questions remain about the limitations and vulnerabilities of such post-hoc explainers. We have built a tool for generating synthetic tabular data sets, which enables us to probe the explanation system opportunistically based on its architecture. In this paper, we report on our success in revealing a scenario where LIME’s explanation violates local faithfulness.
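To make the notion of local faithfulness concrete, here is a simplified sketch of the check such a probe performs: fit a LIME-style locally weighted linear surrogate around an instance and measure how well it agrees with the black box in that neighborhood. This uses a hand-rolled surrogate rather than the lime library itself, and every name and parameter below is illustrative.

```python
# Simplified local-faithfulness check: not the lime library's implementation.
import numpy as np
from sklearn.linear_model import Ridge

def local_fidelity(predict_fn, x, scale=0.1, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around instance x and return
    its weighted R^2 agreement with the black box on the same neighborhood.

    predict_fn: maps a 2-D array of instances to a 1-D array of scores
                (e.g. the probability of the positive class)
    x:          the instance being explained, shape (n_features,)
    """
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))  # perturbations
    target = predict_fn(Z)                                         # black-box scores
    distances = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)        # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, target, sample_weight=weights)
    # A low score flags a neighborhood where the linear explanation is unfaithful.
    return surrogate.score(Z, target, sample_weight=weights)
```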