Title: Learner Modeling Interpretability and Explainability in Intelligent Adaptive Systems
Learner models are used to support the implementation of personalization features in Adaptive Instructional Systems (AISs; e.g., adaptive sequencing of activities, adaptive feedback), which are important aspects of Intelligent Adaptive Systems. With increased computational power, more advanced methodologies, and more available data, learner models now incorporate a variety of Artificial Intelligence techniques. These techniques vary in complexity, which influences the interpretability and explainability of learner models. Interpretable and explainable learner models can facilitate appropriate use of learner modeling information in AISs, as well as their adoption and scalability. This chapter elaborates on the definitions of interpretability and explainability, describes the interpretability and explainability levels of different models, examines the levels of explainability needed to produce information for teachers and learners, and discusses implications and future work in this area.
Award ID(s):
2229612
PAR ID:
10591739
Author(s) / Creator(s):
Publisher / Repository:
Springer Nature Switzerland
Date Published:
Volume:
20
ISBN:
978-3-031-58362-9
Page Range / eLocation ID:
95 to 109
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
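To make the interpretability spectrum described in the chapter abstract concrete, here is a minimal sketch of Bayesian Knowledge Tracing (BKT), a classic and highly interpretable learner-modeling technique; the parameter values, skill prior, and response sequence below are hypothetical illustrations, not material from the chapter.

def bkt_update(p_known, correct, guess=0.2, slip=0.1, transit=0.15):
    """Update the estimated probability that a learner has mastered a skill
    after observing one correct or incorrect response (standard BKT update)."""
    if correct:
        posterior = (p_known * (1 - slip)) / (
            p_known * (1 - slip) + (1 - p_known) * guess)
    else:
        posterior = (p_known * slip) / (
            p_known * slip + (1 - p_known) * (1 - guess))
    # Allow for learning that may occur after the practice opportunity.
    return posterior + (1 - posterior) * transit

p_mastery = 0.3  # hypothetical prior probability that the skill is already known
for outcome in [True, False, True, True]:  # hypothetical response sequence
    p_mastery = bkt_update(p_mastery, outcome)
    print(f"P(mastery) = {p_mastery:.3f}")

Because each parameter (guess, slip, transit) has a direct pedagogical reading, a model like this sits at the interpretable end of the spectrum, whereas deep-learning-based learner models typically require post-hoc explanation methods.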
More Like this
  1. With increasing interest in explaining machine learning (ML) models, this paper synthesizes many topics related to ML explainability. We distinguish explainability from interpretability, local from global explainability, and feature importance versus feature relevance. We demonstrate and visualize different explanation methods, how to interpret them, and provide a complete Python package (scikit-explain) to allow future researchers and model developers to explore these explainability methods. The explainability methods include Shapley additive explanations (SHAP), Shapley additive global explanation (SAGE), and accumulated local effects (ALE). Our focus is primarily on Shapley-based techniques, which serve as a unifying framework for various existing methods to enhance model explainability. For example, SHAP unifies methods like local interpretable model-agnostic explanations (LIME) and tree interpreter for local explainability, while SAGE unifies the different variations of permutation importance for global explainability. We provide a short tutorial for explaining ML models using three disparate datasets: a convection-allowing model dataset for severe weather prediction, a nowcasting dataset for subfreezing road surface prediction, and satellite-based data for lightning prediction. In addition, we showcase the adverse effects that correlated features can have on the explainability of a model. Finally, we demonstrate the notion of evaluating model impacts of feature groups instead of individual features. Evaluating the feature groups mitigates the impacts of feature correlations and can provide a more holistic understanding of the model. All code, models, and data used in this study are freely available to accelerate the adoption of machine learning explainability in the atmospheric and other environmental sciences.
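The tutorial above relies on the authors' scikit-explain package, whose exact API is not assumed here; the sketch below instead illustrates the same local-versus-global distinction on synthetic data using the widely available shap and scikit-learn libraries.

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic tabular data: only the first two features matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Local explainability: SHAP attributes each individual prediction to the features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # one row of attributions per sample
print("SHAP attributions for the first sample:", shap_values[0])

# Global explainability: permutation importance measures the score drop when each
# feature is shuffled (one of the variants that SAGE generalizes).
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("Mean permutation importances:", result.importances_mean)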
  2. Explainability is essential for AI models, especially in clinical settings where understanding the model’s decisions is crucial. Despite their impressive performance, black-box AI models are unsuitable for clinical use if their operations cannot be explained to clinicians. While deep neural networks (DNNs) represent the forefront of model performance, their explanations are often not easily interpreted by humans. By contrast, traditional machine learning models built on hand-crafted features that capture different aspects of the input data are generally more understandable, but they often lack the effectiveness of advanced models because of human limitations in feature design. To address this, we propose the explainable shallow convolutional neural network (ExShall-CNN), a novel model for medical image processing that combines the interpretability of hand-crafted features with the performance of advanced deep convolutional networks such as U-Net for medical image segmentation. Built on recent advancements in machine learning, ExShall-CNN incorporates widely used kernels while ensuring transparency, making its decisions visually interpretable by physicians and clinicians. This balanced approach offers both the accuracy of deep learning models and the explainability needed for clinical applications.
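The abstract does not specify the ExShall-CNN architecture, so the sketch below is only a generic illustration of the underlying idea: a shallow, kernel-based convolutional model for segmentation-style prediction whose few filters remain individually inspectable. Layer sizes and channel counts are arbitrary placeholders, written with PyTorch.

import torch
import torch.nn as nn

class ShallowSegNet(nn.Module):
    """Generic shallow CNN for per-pixel prediction; NOT the ExShall-CNN architecture."""
    def __init__(self, in_channels=1, n_classes=2, n_filters=8):
        super().__init__()
        # A single convolutional layer: each of the n_filters kernels can be
        # visualized directly, which is what keeps the model inspectable.
        self.conv = nn.Conv2d(in_channels, n_filters, kernel_size=5, padding=2)
        self.head = nn.Conv2d(n_filters, n_classes, kernel_size=1)

    def forward(self, x):
        return self.head(torch.relu(self.conv(x)))

model = ShallowSegNet()
dummy = torch.randn(1, 1, 64, 64)   # hypothetical grayscale image
logits = model(dummy)               # shape: (1, 2, 64, 64), per-pixel class scores
print(logits.shape)
# The learned filters can be inspected directly via model.conv.weight.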
  3. Practitioners increasingly use machine learning (ML) models, yet models have become more complex and harder to understand. To understand complex models, researchers have proposed techniques to explain model predictions. However, practitioners struggle to use explainability methods because they do not know which explanation to choose and how to interpret the explanation. Here we address the challenge of using explainability methods by proposing TalkToModel: an interactive dialogue system that explains ML models through natural language conversations. TalkToModel consists of three components: an adaptive dialogue engine that interprets natural language and generates meaningful responses; an execution component that constructs the explanations used in the conversation; and a conversational interface. In real-world evaluations, 73% of healthcare workers agreed they would use TalkToModel over existing systems for understanding a disease prediction model, and 85% of ML professionals agreed TalkToModel was easier to use, demonstrating that TalkToModel is highly effective for model explainability.
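As a toy illustration of the three-component structure the abstract describes (dialogue engine, execution component, conversational interface), here is a sketch in which the intent names, parsing rules, and canned explanation text are hypothetical placeholders rather than TalkToModel's actual implementation.

def parse_intent(utterance: str) -> str:
    """Dialogue engine: map a natural-language question to an intent (placeholder rules)."""
    text = utterance.lower()
    if "why" in text or "important" in text:
        return "feature_importance"
    if "what if" in text:
        return "counterfactual"
    return "unknown"

def execute(intent: str, model, instance) -> str:
    """Execution component: construct the explanation for the detected intent."""
    if intent == "feature_importance":
        # A real system would compute SHAP/LIME-style attributions here.
        return f"The model predicts {model(instance)}; the top contributing features would be listed here."
    if intent == "counterfactual":
        return "A 'what if' explanation would be generated here."
    return "Sorry, I did not understand the question."

def converse(utterance: str, model, instance) -> str:
    """Conversational interface: connect parsing and execution into a reply."""
    return execute(parse_intent(utterance), model, instance)

toy_model = lambda features: int(sum(features) > 1)  # stand-in disease prediction model
print(converse("Why is this patient predicted to be high risk?", toy_model, [0.7, 0.6]))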
  4. Co-creative processes between people can be characterized by rich dialogue that carries each person's ideas into the collaborative space. When people co-create an artifact that is both technical and aesthetic, their dialogue reflects the interplay between these two dimensions. However, the dialogue mechanisms that express this interplay and the extent to which they are related to outcomes, such as peer satisfaction, are not well understood. This paper reports on a study of 68 high school learner dyads' textual dialogues as they create music by writing code together in a digital learning environment for musical remixing. We report on a novel dialogue taxonomy built to capture the technical and aesthetic dimensions of learners' collaborative dialogues. We identified dialogue act n-grams (sequences of length 1, 2, or 3) present within the corpus and discovered five significant n-gram predictors of whether a learner felt satisfied with their partner during the collaboration. The learner was more likely to report higher satisfaction with their partner when the learner frequently acknowledged their partner, exchanged positive feedback with their partner, and their partner proposed an idea and elaborated on it. In contrast, the learner was more likely to report lower satisfaction with their partner when the learner frequently accepted back-to-back proposals from their partner and when the partner responded to the learner's statements with positive feedback. This work advances understanding of collaborative dialogue within co-creative domains and suggests dialogue strategies that may be helpful to foster co-creativity as learners collaborate to produce a creative artifact. The findings also suggest important areas of focus for intelligent or adaptive systems that aim to support learners during the co-creative process.
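To make the n-gram analysis concrete, the sketch below extracts dialogue-act n-grams of length 1 to 3 and tallies them per dyad; the act labels and the two tiny example dialogues are hypothetical, and the study's actual taxonomy and statistical modeling may differ.

from collections import Counter
from itertools import chain

def act_ngrams(acts, max_n=3):
    """Return all dialogue-act n-grams of length 1..max_n from one coded dialogue."""
    return list(chain.from_iterable(
        (tuple(acts[i:i + n]) for i in range(len(acts) - n + 1))
        for n in range(1, max_n + 1)))

# Hypothetical coded dialogues (sequences of dialogue acts), each paired with a
# binary partner-satisfaction outcome for the dyad.
dialogues = [
    (["propose", "acknowledge", "elaborate", "positive_feedback"], 1),
    (["propose", "propose", "accept", "accept"], 0),
]

for acts, satisfied in dialogues:
    counts = Counter(act_ngrams(acts))
    print(satisfied, counts.most_common(3))
# Such n-gram frequencies can then serve as predictors of self-reported
# partner satisfaction, e.g., in a regression model.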
  5. The increasing adoption of machine learning tools has led to calls for accountability via model interpretability. But what does it mean for a machine learning model to be interpretable by humans, and how can this be assessed? We focus on two definitions of interpretability that have been introduced in the machine learning literature: simulatability (a user's ability to run a model on a given input) and "what if" local explainability (a user's ability to correctly determine a model's prediction under local changes to the input, given knowledge of the model's original prediction). Through a user study with 1,000 participants, we test whether humans perform well on tasks that mimic the definitions of simulatability and "what if" local explainability on models that are typically considered locally interpretable. To track the relative interpretability of models, we employ a simple metric, the runtime operation count on the simulatability task. We find evidence that as the number of operations increases, participant accuracy on the local interpretability tasks decreases. In addition, this evidence is consistent with the common intuition that decision trees and logistic regression models are interpretable and are more interpretable than neural networks. 
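The "runtime operation count" metric above suggests a simple back-of-the-envelope comparison; the counting rules in the sketch below are a simplification for illustration, not the paper's exact metric.

def logistic_regression_ops(n_features):
    # One multiply and one add per feature, plus the final threshold comparison,
    # when a person simulates the model by hand.
    return 2 * n_features + 1

def decision_tree_ops(path_depth):
    # One comparison per internal node on the root-to-leaf path.
    return path_depth

print("Logistic regression, 10 features:", logistic_regression_ops(10), "operations")
print("Decision tree, path of depth 3:  ", decision_tree_ops(3), "operations")
# The study reports that participant accuracy on simulatability tasks
# declined as this kind of operation count grew.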