Title: On the (Complete) Reasons Behind Decisions
Abstract: Recent work has shown that the input-output behavior of some common machine learning classifiers can be captured in symbolic form, allowing one to reason about the behavior of these classifiers using symbolic techniques. This includes explaining decisions, measuring robustness, and proving formal properties of machine learning classifiers by reasoning about the corresponding symbolic classifiers. In this work, we present a theory for unveiling the reasons behind the decisions made by Boolean classifiers and study some of its theoretical and practical implications. At the core of our theory is the notion of a complete reason, which can be viewed as a necessary and sufficient condition for why a decision was made. We show how the complete reason can be used for computing notions such as sufficient reasons (also known as PI-explanations and abductive explanations), for determining decision and classifier bias, and for evaluating counterfactual statements such as “a decision will stick even if ... because ....” We present a linear-time algorithm for computing the complete reason behind a decision, assuming the classifier is represented by a Boolean circuit of appropriate form. We then show how the computed complete reason can be used to answer many queries about a decision in linear or polynomial time. We conclude with a case study that illustrates the various notions and techniques we introduced.
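To make the notion of a sufficient reason concrete, here is a minimal brute-force sketch (written for this summary, not taken from the paper) that enumerates the minimal subsets of an instance's feature settings that fix a classifier's decision. The toy loan classifier `f` and its feature names are illustrative assumptions; the paper's actual contribution is a linear-time algorithm that computes the complete reason from a Boolean circuit of appropriate form, which this exhaustive search does not attempt.

```python
from itertools import combinations, product

def is_sufficient(f, x, subset):
    """Check whether fixing the features in `subset` to their values in x
    forces f to the decision f(x) on every completion of the free features."""
    n, decision = len(x), f(x)
    free = [i for i in range(n) if i not in subset]
    for values in product([0, 1], repeat=len(free)):
        y = list(x)
        for i, v in zip(free, values):
            y[i] = v
        if f(tuple(y)) != decision:
            return False
    return True

def sufficient_reasons(f, x):
    """Enumerate all minimal sufficient reasons (PI-explanations) for f's
    decision on x by brute force -- exponential, for illustration only."""
    n, reasons = len(x), []
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            s = set(subset)
            # Increasing k means any superset of an earlier reason is non-minimal.
            if any(r <= s for r in reasons):
                continue
            if is_sufficient(f, x, s):
                reasons.append(s)
    return reasons

# Hypothetical classifier: approve if (income AND credit) OR (income AND no_debt).
f = lambda v: (v[0] and v[1]) or (v[0] and v[2])
x = (1, 1, 1)  # instance: income=1, credit=1, no_debt=1
print(sufficient_reasons(f, x))  # -> [{0, 1}, {0, 2}]
```

Each returned set is a distinct sufficient reason: fixing income together with either credit or no_debt already forces the approval, matching the PI-explanation reading in the abstract.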
Award ID(s):
1910317
PAR ID:
10369934
Author(s) / Creator(s):
Publisher / Repository:
Springer Science + Business Media
Date Published:
Journal Name:
Journal of Logic, Language and Information
Volume:
32
Issue:
1
ISSN:
0925-8531
Page Range / eLocation ID:
p. 63-88
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The complete reason behind a decision is a Boolean formula that characterizes why the decision was made. This recently introduced notion has a number of applications, including generating explanations, detecting decision bias, and evaluating counterfactual queries. Prime implicants of the complete reason are known as sufficient reasons for the decision; they correspond to what are known as PI-explanations and abductive explanations. In this paper, we refer to the prime implicates of a complete reason as necessary reasons for the decision. We justify this terminology semantically and show that necessary reasons correspond to what are known as contrastive explanations. We also study the computation of complete reasons for multi-class decision trees and graphs with nominal and numeric features, for which we derive efficient, closed-form complete reasons. We further investigate the computation of shortest necessary and sufficient reasons for a broad class of complete reasons, which includes the derived closed forms and the complete reasons for Sentential Decision Diagrams (SDDs). We provide an algorithm that can enumerate the shortest necessary reasons in output-polynomial time. Enumerating shortest sufficient reasons for this class of complete reasons is hard even for a single reason; for this problem, we provide an algorithm that appears to be quite efficient, as we show empirically.
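    As a toy illustration of the two dual notions (constructed for this summary, not drawn from the paper), take a hypothetical complete reason over three features and read off both kinds of explanations:

```latex
% Hypothetical complete reason R for some decision:
\[ R \;=\; x \land (y \lor z) \]
% Its prime implicants are the sufficient reasons (PI-explanations):
\[ x \land y \qquad\text{and}\qquad x \land z \]
% Its prime implicates are the necessary reasons (contrastive explanations):
\[ x \qquad\text{and}\qquad y \lor z \]
```

    Each sufficient reason is a term that implies R, while each necessary reason is a clause implied by R: violating x, or violating both y and z, is exactly what it would take to contrast the decision.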
  2. Pham, Tien; Solomon, Latasha; Hohil, Myron E. (Ed.)
    Explainable Artificial Intelligence (XAI) is the capability of explaining the reasoning behind the choices made by a machine learning (ML) algorithm, which helps users understand and maintain the transparency of the algorithm’s decision-making. Humans make thousands of decisions every day, and for each one an individual can explain the reasons behind the choice they made. The same is not true of ML and AI systems. Furthermore, XAI was not widely researched until the topic was recently brought to the fore, and it has since become one of the most relevant topics in AI for producing trustworthy and transparent outcomes. XAI aims to provide maximum transparency into an ML algorithm by answering questions about how models arrived at their output. ML models with XAI can explain the rationale behind their results, expose the weaknesses and strengths of the learning models, and indicate how the models will behave in the future. In this paper, we investigate XAI for algorithmic trustworthiness and transparency. We evaluate XAI on example use cases, using the SHAP (SHapley Additive exPlanations) library to visualize the effect of features individually and cumulatively in the prediction process.
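    The following sketch shows the style of SHAP analysis this abstract describes, covering both individual (per-prediction) and cumulative (dataset-wide) feature effects. The dataset and model are stand-ins chosen for the example, not the paper’s actual use cases; it assumes the shap and scikit-learn packages are installed.

```python
# Illustrative SHAP workflow (assumed setup, not the paper's experiments).
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# The unified Explainer picks a fast tree-specific algorithm for this model.
explainer = shap.Explainer(model)
shap_values = explainer(X.iloc[:200])  # attributions for a sample of predictions

shap.plots.waterfall(shap_values[0])   # individual effect: one prediction, feature by feature
shap.plots.beeswarm(shap_values)       # cumulative effect: importance across the sample
```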
  3. Causal information, from health guidance on diets that prevent disease to financial advice for growing savings, is everywhere. Psychological research has shown that people can readily use causal information to make decisions and choose interventions. However, this work has mainly focused on novel systems rather than everyday domains, such as health and finance. Recent research suggests that in familiar scenarios, causal information can lead to worse decisions than having no information at all, but the mechanism behind this effect is not yet known. We aimed to address this by studying whether people reason differently when they receive causal information and whether the type of reasoning affects decision quality. For a set of decisions about health and personal finance, we used quantitative (e.g., decision accuracy) and qualitative (e.g., free-text descriptions of decision processes) methods to capture decision quality and how people used the provided information. We found that participants given causal information focused on different aspects than did those who did not receive causal information and that reasoning linked to better decisions with no information was associated with worse decisions with causal information. Furthermore, people brought in many aspects of their existing knowledge and preferences, going beyond the conclusions licensed by the provided information. Our findings provide new insights into why decision quality differs systematically between familiar and novel scenarios and suggest directions for future work guiding everyday choices. 
  4. Explanations can help users of Artificial Intelligence (AI) systems gain a better understanding of the reasoning behind a model’s decision, facilitate their trust in AI, and assist them in making informed decisions. Because of its numerous benefits in improving how users interact and collaborate with AI, this has spurred the AI/ML community to develop more understandable and interpretable models, while design researchers continue to study ways to present explanations of these models’ decisions in a coherent form. However, there is still a lack of intentional design effort from the HCI community around these explanation system designs. In this paper, we contribute a framework to support the design and validation of explainable AI systems, one that requires carefully thinking through design decisions at several important decision points. The framework captures key aspects of explanations ranging from the target users, to the data, to the AI models in use. We also discuss how we applied our framework to design an explanation interface for trace link prediction of software artifacts.
  5. SHAP explanations are a popular feature-attribution mechanism for explainable AI. They use game-theoretic notions to measure the influence of individual features on the prediction of a machine learning model. Despite a lot of recent interest from both academia and industry, it is not known whether SHAP explanations of common machine learning models can be computed efficiently. In this paper, we establish the complexity of computing the SHAP explanation in three important settings. First, we consider fully-factorized data distributions, and show that the complexity of computing the SHAP explanation is the same as the complexity of computing the expected value of the model. This fully-factorized setting is often used to simplify the SHAP computation, yet our results show that the computation can be intractable for commonly used models such as logistic regression. Going beyond fully-factorized distributions, we show that computing SHAP explanations is already intractable for a very simple setting: computing SHAP explanations of trivial classifiers over naive Bayes distributions. Finally, we show that even computing SHAP over the empirical distribution is #P-hard. 
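    For reference, the quantity whose complexity is analyzed here is the standard SHAP score (the textbook Shapley-value definition, not a result of this paper), taken with respect to a set function v induced by the model and the assumed data distribution:

```latex
\[
  \mathrm{SHAP}(i) \;=\; \sum_{S \subseteq F \setminus \{i\}}
    \frac{|S|!\,\bigl(|F|-|S|-1\bigr)!}{|F|!}\,
    \bigl( v(S \cup \{i\}) - v(S) \bigr)
\]
% F: the set of features; v(S): the expected model output when the features
% in S are fixed to their values in the instance being explained and the
% remaining features follow the assumed data distribution.
```

    Computing v(S) requires an expectation over the data distribution, which is where the fully-factorized, naive Bayes, and empirical settings discussed in the abstract enter.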