Title: Explaining machine learning models with interactive natural language conversations using TalkToModel
Abstract

Practitioners increasingly use machine learning (ML) models, yet models have become more complex and harder to understand. To understand complex models, researchers have proposed techniques to explain model predictions. However, practitioners struggle to use explainability methods because they do not know which explanation to choose or how to interpret it. Here we address the challenge of using explainability methods by proposing TalkToModel: an interactive dialogue system that explains ML models through natural language conversations. TalkToModel consists of three components: an adaptive dialogue engine that interprets natural language and generates meaningful responses; an execution component that constructs the explanations used in the conversation; and a conversational interface. In real-world evaluations, 73% of healthcare workers agreed they would use TalkToModel over existing systems for understanding a disease prediction model, and 85% of ML professionals agreed TalkToModel was easier to use, demonstrating that TalkToModel is highly effective for model explainability.

 
Award ID(s): 2046873
NSF-PAR ID: 10435605
Publisher / Repository: Nature Publishing Group
Journal Name: Nature Machine Intelligence
Volume: 5
Issue: 8
ISSN: 2522-5839
Page Range / eLocation ID: p. 873-883
Sponsoring Org: National Science Foundation
More Like this
  1. Machine Learning (ML) models are increasingly used to make critical decisions in real-world applications, yet they have become more complex, making them harder to understand. To this end, researchers have proposed several techniques to explain model predictions. However, practitioners struggle to use these explainability techniques because they often do not know which one to choose and how to interpret the results of the explanations. In this work, we address these challenges by introducing TalkToModel: an interactive dialogue system for explaining machine learning models through conversations. TalkToModel comprises 1) a dialogue engine that adapts to any tabular model and dataset, understands language, and generates responses, and 2) an execution component that constructs the explanations. In real-world evaluations with humans, 73% of healthcare workers (e.g., doctors and nurses) agreed they would use TalkToModel over baseline point-and-click systems for explainability in a disease prediction task, and 85% of ML professionals agreed TalkToModel was easier to use for computing explanations. Our findings demonstrate that TalkToModel is more effective for model explainability than existing systems, introducing a new category of explainability tools for practitioners. 
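    A minimal sketch of the parse-then-execute pattern these two components suggest is shown below. It is not the paper's implementation: the keyword-based intent detection stands in for TalkToModel's learned language-to-operation parser, and the dataset, model, and the permutation-importance and "unusual feature" responses are illustrative assumptions only.

```python
# A toy parse-and-execute loop in the spirit of TalkToModel: a user utterance is
# mapped to an operation (predict, explain, show important features), which is
# then run against a trained tabular model. Keyword matching below is only a
# stand-in for the learned parser described in the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names
model = RandomForestClassifier(random_state=0).fit(X, y)

def parse(utterance: str) -> str:
    """Very rough intent detection (the real system translates utterances
    into a grammar of operations with a fine-tuned language model)."""
    text = utterance.lower()
    if "why" in text or "explain" in text:
        return "explain"
    if "important" in text:
        return "importance"
    return "predict"

def execute(intent: str, row_id: int = 0) -> str:
    if intent == "predict":
        proba = model.predict_proba(X[row_id : row_id + 1])[0, 1]
        return f"Predicted probability of the positive class for row {row_id}: {proba:.2f}"
    if intent == "importance":
        result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
        top = result.importances_mean.argsort()[::-1][:3]
        return "Globally important features: " + ", ".join(names[i] for i in top)
    # "explain": contrast the instance against the dataset mean as a crude local
    # explanation (the real system computes LIME/SHAP-style attributions).
    diffs = X[row_id] - X.mean(axis=0)
    top = abs(diffs).argsort()[::-1][:3]
    return "Features most unusual for this instance: " + ", ".join(names[i] for i in top)

for utterance in ["What would the model predict for this patient?",
                  "Why did it predict that?",
                  "Which features are most important overall?"]:
    print(utterance, "->", execute(parse(utterance)))
```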
  2. As practitioners increasingly deploy machine learning models in critical domains such as health care, finance, and policy, it becomes vital to ensure that domain experts function effectively alongside these models. Explainability is one way to bridge the gap between human decision-makers and machine learning models. However, most of the existing work on explainability focuses on one-off, static explanations like feature importances or rule lists. These sorts of explanations may not be sufficient for many use cases that require dynamic, continuous discovery from stakeholders. In the literature, few works ask decision-makers about the utility of existing explanations and other desiderata they would like to see in an explanation going forward. In this work, we address this gap and carry out a study where we interview doctors, healthcare professionals, and policymakers about their needs and desires for explanations. Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues. Domain experts wish to treat machine learning models as "another colleague", i.e., one that can be held accountable by asking why it made a particular decision through expressive and accessible natural language interactions. Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations as a starting point for future work. Further, we show why natural language dialogues satisfy these principles and are a desirable way to build interactive explanations. Next, we provide a design of a dialogue system for explainability and discuss the risks, trade-offs, and research opportunities of building these systems. Overall, we hope our work serves as a starting place for researchers and engineers to design interactive explainability systems.
  3. Abstract

    With increasing interest in explaining machine learning (ML) models, this paper synthesizes many topics related to ML explainability. We distinguish explainability from interpretability, local from global explainability, and feature importance from feature relevance. We demonstrate and visualize different explanation methods, show how to interpret them, and provide a complete Python package (scikit-explain) to allow future researchers and model developers to explore these explainability methods. The explainability methods include Shapley additive explanations (SHAP), Shapley additive global explanation (SAGE), and accumulated local effects (ALE). Our focus is primarily on Shapley-based techniques, which serve as a unifying framework for various existing methods to enhance model explainability. For example, SHAP unifies methods like local interpretable model-agnostic explanations (LIME) and tree interpreter for local explainability, while SAGE unifies the different variations of permutation importance for global explainability. We provide a short tutorial for explaining ML models using three disparate datasets: a convection-allowing model dataset for severe weather prediction, a nowcasting dataset for subfreezing road surface prediction, and satellite-based data for lightning prediction. In addition, we showcase the adverse effects that correlated features can have on the explainability of a model. Finally, we demonstrate the notion of evaluating the model impact of feature groups instead of individual features. Evaluating the feature groups mitigates the impacts of feature correlations and can provide a more holistic understanding of the model. All code, models, and data used in this study are freely available to accelerate the adoption of machine learning explainability in the atmospheric and other environmental sciences.
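    As a hedged illustration of the local-versus-global distinction drawn above, the snippet below uses the general-purpose shap package rather than scikit-explain itself; the dataset and model are arbitrary choices, and the mean-|SHAP| ranking is only a rough proxy for global measures such as SAGE or permutation importance.

```python
# Local vs. global explanations with SHAP on a small tabular model.
# This uses the generic shap package, not the scikit-explain wrapper described
# in the paper; the dataset and model choices are illustrative only.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

data = load_diabetes()
X, y, names = data.data, data.target, data.feature_names
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Local explainability: SHAP values attribute a single prediction to each feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # shape: (n_samples, n_features)
print("Attribution for the first prediction:")
for name, value in zip(names, shap_values[0]):
    print(f"  {name}: {value:+.2f}")

# A rough global view: mean |SHAP value| per feature (akin to a SHAP summary plot).
global_rank = np.abs(shap_values).mean(axis=0).argsort()[::-1]
print("Most influential features overall:", [names[i] for i in global_rank[:3]])
```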

     
  4. Abstract

    As the pressures on water resources are ever increasing, the organization of complex disparate data and scientific information to inform the actions to protect and enhance the resilience of freshwater resources is key for sustainable development and implementation of integrated water resource management (IWRM). Methodologies supporting IWRM implementation have largely focused on water management and governance, with less attention to evaluation methods of ecologic, economic, and social conditions. To assist in assessing water resource sustainability, the Integrated Hydro‐Environment Assessment Tool (IHEAT) has been developed to create a framework for different disciplines and interests to engage in structured dialogue. The IHEAT builds on the considerable body of knowledge developed around IWRM and seeks to place this information into a single framework that facilitates the cogeneration of knowledge between managers, stakeholders, and the communities affected by management decisions with the understanding that there is a need to merge expert analysis with traditional knowledge and the lived experience of communities. IHEAT merges the driver‐pressure‐state‐impact‐response (DPSIR) framework, the Millennium Ecosystem Assessment's ecosystem services and human well‐being (HWB) framework, sustainability criteria for water resource systems, and water resources indexes and sets of indicators to better understand spatiotemporal interactions between hydrologic, socioeconomic, and ecologic systems and evaluate impacts of disturbances on ecological goods and services and HWB. IHEAT consists of a Conceptual Template (IHEAT‐CT) which provides a systematic framework for assessing basin conditions and guiding indicator selection as well as an Assessment Interface (IHEAT‐AI) for organizing, processing, and assessing analytical results. The IHEAT‐CT, presented herein, is a rapid screening tool that connects water use directly, or through ecosystem goods and services (EGS), to constituents of HWB. Disturbance Templates for eight pressure types, such as land‐use change, climate change, and population growth, are provided to guide practitioners regarding potential changes to landscape elements in the hydrological cycle, impacts on EGS, and societal implications on HWB. The basin screening results in a summary report card illuminating key freshwater ecosystems, the EGS they provide, and potential responses to drivers and pressures acting on the hydrologic system. This screening provides a common understanding by technical and nontechnical parties and provides the foundation for more complex conceptual models should they be required. An indicator list guides the selection of hydrologic, ecologic, economic, and social analytical methods to support IWRM technical input.

     
  5. Abstract

    There has been significant work recently in developing machine learning (ML) models in high energy physics (HEP) for tasks such as classification, simulation, and anomaly detection. Often these models are adapted from those designed for datasets in computer vision or natural language processing, which lack inductive biases suited to HEP data, such as equivariance to its inherent symmetries. Such biases have been shown to make models more performant and interpretable, and reduce the amount of training data needed. To that end, we develop the Lorentz group autoencoder (LGAE), an autoencoder model equivariant with respect to the proper, orthochronous Lorentz group $\textrm{SO}^+(3,1)$, with a latent space living in the representations of the group. We present our architecture and several experimental results on jets at the LHC and find it outperforms graph and convolutional neural network baseline models on several compression, reconstruction, and anomaly detection metrics. We also demonstrate the advantage of such an equivariant model in analyzing the latent space of the autoencoder, which can improve the explainability of potential anomalies discovered by such ML models.
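    The LGAE architecture is not reproduced here; the short check below is only a numerical illustration of the $\textrm{SO}^+(3,1)$ symmetry the model is equivariant to: it boosts a toy jet of particle four-momenta and verifies that pairwise Minkowski inner products, the kind of invariant quantity an equivariant latent space can expose, are unchanged.

```python
# Numerically verify Lorentz invariance of Minkowski inner products under a boost.
# This only illustrates the SO+(3,1) symmetry LGAE is built around; it is not an
# implementation of the autoencoder itself.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric, signature (+,-,-,-)

def boost_z(rapidity: float) -> np.ndarray:
    """Proper, orthochronous boost along the z-axis."""
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    return np.array([[ch, 0, 0, sh],
                     [0, 1, 0, 0],
                     [0, 0, 1, 0],
                     [sh, 0, 0, ch]])

rng = np.random.default_rng(0)
p3 = rng.normal(size=(5, 3))                       # random 3-momenta for 5 particles
energy = np.sqrt((p3 ** 2).sum(axis=1) + 0.14**2)  # on-shell energies (pion-like mass)
jet = np.column_stack([energy, p3])                # four-momenta, shape (5, 4)

lam = boost_z(0.8)
assert np.allclose(lam.T @ eta @ lam, eta)          # Lambda preserves the metric

boosted = jet @ lam.T                               # boost every four-vector
gram = jet @ eta @ jet.T                            # pairwise Minkowski inner products
gram_boosted = boosted @ eta @ boosted.T
print("Invariant under the boost:", np.allclose(gram, gram_boosted))  # True
```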

     