This content will become publicly available on August 3, 2026
Is Your Explanation Reliable: Confidence-Aware Explanation on Graph Neural Networks
- Award ID(s): 2331908
- PAR ID: 10632922
- Publisher / Repository: ACM
- Date Published:
- ISBN: 9798400714542
- Page Range / eLocation ID: 3740 to 3751
- Format(s): Medium: X
- Location: Toronto, ON, Canada
- Sponsoring Org: National Science Foundation
More Like this
-
Edward N. Zalta & Uri Nodelman (Eds.) This entry discusses accounts of causal explanation developed after approximately 1990. It focuses on three accounts: those centered on mechanisms and mechanistic explanation (Section 1), the kairetic account of explanation (Section 2), and interventionist accounts of causal explanation (Section 3). All three target explanations of why, or perhaps how, some phenomenon occurs (in contrast to, say, explanations of what something is, which are generally taken to be non-causal), and they attempt to capture causal explanations aimed at such explananda. Section 4 takes up recent proposals about how causal explanations may differ in explanatory depth or goodness, and Section 5 discusses what is distinctive about causal (as opposed to non-causal) explanations.
-
As machine learning methods see greater adoption in high-stakes applications such as medical image diagnosis, the need for model interpretability and explanation has become more critical. Classical approaches that assess feature importance (e.g., saliency maps) do not explain how or why a particular region of an image is relevant to the prediction. We propose a method that explains the outcome of a classification black box by gradually exaggerating the semantic effect of a given class. Given a query input to a classifier, our method produces a progressive set of plausible variations of that query which gradually shift the posterior probability from the original class to its negation. These counterfactually generated samples preserve features unrelated to the classification decision, so a user can employ the method as a "tuning knob" to traverse the data manifold while crossing the decision boundary. The method is model-agnostic and requires only the output value and the gradient of the predictor with respect to its input (a minimal gradient-only sketch of this idea appears after this list).
-
Machine learning systems are deployed in domains such as hiring and healthcare, where undesired classifications can have serious ramifications for the user. Thus, there is a rising demand for explainable AI systems which provide actionable steps for lay users to obtain their desired outcome. To meet this need, we propose FACET, the first explanation analytics system which supports a user in interactively refining counterfactual explanations for decisions made by tree ensembles. As FACET's foundation, we design a novel type of counterfactual explanation called the counterfactual region. Unlike traditional counterfactuals, FACET's regions concisely describe portions of the feature space where the desired outcome is guaranteed, regardless of variations in exact feature values. This property, which we coin explanation robustness, is critical for the practical application of counterfactuals. We develop a rich set of novel explanation analytics queries which empower users to identify personalized counterfactual regions that account for their real-world circumstances. To process these queries, we develop a compact high-dimensional counterfactual region index along with index-aware query processing strategies for near real-time explanation analytics. We evaluate FACET against state-of-the-art explanation techniques on eight public benchmark datasets and demonstrate that FACET generates actionable explanations of similar quality in an order of magnitude less time while providing critical robustness guarantees. Finally, we conduct a preliminary user study which suggests that FACET's regions lead to higher user understanding than traditional counterfactuals (an illustrative sketch of such regions appears after this list).
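To make the second abstract's mechanism concrete, here is a minimal PyTorch sketch of generating a progression of inputs that walks a query across a classifier's decision boundary using only the predictor's output and input gradient. The function name, step size, and step count are hypothetical illustrations, not the paper's implementation; in particular, the paper constrains its samples to stay plausible on the data manifold, which this unconstrained gradient walk omits.

```python
import torch

def progressive_counterfactuals(model, x, target_class, steps=8, lr=0.05):
    """Produce a sequence of inputs whose predicted probability for
    `target_class` decreases step by step: a progression from the
    original class toward its negation.

    model        -- any differentiable classifier returning logits (batch, classes)
    x            -- a single query input, shape (1, ...)
    target_class -- the class whose semantic effect is gradually removed
    """
    snapshots = []
    x = x.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        prob = torch.softmax(model(x), dim=-1)[0, target_class]
        prob.backward()  # d p(target | x) / d x
        snapshots.append((x.detach().clone(), float(prob)))
        with torch.no_grad():
            # Signed-gradient step that lowers p(target | x); the paper's
            # method instead moves along a learned data manifold.
            x = (x - lr * x.grad.sign()).detach()
    return snapshots
```

A user would inspect the returned snapshots side by side: for a well-behaved model the recorded probabilities decrease across the sequence, visualizing what the classifier treats as evidence for the class.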
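For the third abstract, the key object is the counterfactual region: a portion of feature space where the desired outcome is guaranteed regardless of exact feature values. The sketch below is an illustrative stand-in that models such a region as an axis-aligned box of per-feature intervals and shows how a query could be projected to the nearest in-region point. The class name and interval representation are assumptions for illustration; FACET's actual region extraction from the tree ensemble and its high-dimensional index are not reproduced here.

```python
import numpy as np

class CounterfactualRegion:
    """Toy stand-in for a FACET-style region: an axis-aligned
    hyperrectangle, given as per-feature [low, high] bounds, inside
    which the ensemble's prediction is assumed to be the desired class.
    (FACET derives such regions from the tree ensemble itself and
    indexes them for fast queries; those steps are omitted here.)"""

    def __init__(self, lows, highs):
        self.lows = np.asarray(lows, dtype=float)
        self.highs = np.asarray(highs, dtype=float)

    def contains(self, x):
        x = np.asarray(x, dtype=float)
        return bool(np.all((x >= self.lows) & (x <= self.highs)))

    def nearest_point(self, x):
        # The closest point (in L2) inside an axis-aligned box is a
        # per-feature clip of the query onto the box's bounds.
        return np.clip(np.asarray(x, dtype=float), self.lows, self.highs)

    def distance(self, x):
        return float(np.linalg.norm(self.nearest_point(x)
                                    - np.asarray(x, dtype=float)))

# Hypothetical usage: pick the region closest to the user's features.
# regions = [...]  # regions extracted from a tree ensemble (not shown)
# best = min(regions, key=lambda r: r.distance(query))
# counterfactual = best.nearest_point(query)  # actionable in-region point
```

Because any point inside the box yields the desired outcome, small deviations from the suggested counterfactual remain valid, which is the robustness property the abstract emphasizes.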
