Title: Causal Approaches to Explanation
This entry discusses some accounts of causal explanation developed after approximately 1990. Our focus is on the following three accounts: those that focus on mechanisms and mechanistic explanations (Section 1), the kairetic account of explanation (Section 2), and interventionist accounts of causal explanation (Section 3). All of these have as their target explanations of why or perhaps how some phenomenon occurs (in contrast to, say, explanations of what something is, which is generally taken to be non-causal), and they attempt to capture causal explanations that aim at such explananda. Section 4 then takes up some recent proposals having to do with how causal explanations may differ in explanatory depth or goodness. Section 5 discusses some issues having to do with what is distinctive about causal (as opposed to non-causal) explanations.
Award ID(s):
1945647
PAR ID:
10411348
Editor(s):
Edward N. Zalta & Uri Nodelman
Journal Name:
Stanford Encyclopedia of Philosophy
ISSN:
1095-5054
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Explainability has emerged as a critical AI research objective, but the breadth of proposed methods and application domains suggests that criteria for explanation vary greatly. In particular, what counts as a good explanation, and what kinds of explanation are computationally feasible, have become trickier questions in light of opaque “black box” systems such as deep neural networks. Explanation in such cases has drifted from what many philosophers stipulated as having to involve deductive and causal principles to mere “interpretation,” which approximates what happened in the target system to varying degrees. However, such post hoc constructed rationalizations are highly problematic for social robots that operate interactively in spaces shared with humans. For in such social contexts, explanations of behavior, and, in particular, justifications for violations of expected behavior, should make reference to socially accepted principles and norms. In this article, we show how a social robot’s actions can face explanatory demands for how it came to act on its decision, what goals, tasks, or purposes its design had those actions pursue, and what norms or social constraints the system recognizes in the course of its action. As a result, we argue that explanations for social robots will need to be accurate representations of the system’s operation along causal, purposive, and justificatory lines. These explanations will need to generate appropriate references to principles and norms; explanations based on mere “interpretability” will ultimately fail to connect the robot’s behaviors to its appropriate determinants. We then lay out the foundations for a cognitive robotic architecture for HRI, together with particular component algorithms, for generating explanations and engaging in justificatory dialogues with human interactants. Such explanations track the robot’s actual decision-making and behavior, which themselves are determined by normative principles the robot can describe and use for justifications.
  2. Abstract: This paper examines constraints and their role in scientific explanation. Common views in the philosophical literature suggest that constraints are non-causal and that they provide non-causal explanations. While much of this work focuses on examples from physics, this paper explores constraints from other fields, including neuroscience, physiology, and the social sciences. I argue that these cases involve constraints that are causal and that provide a unique type of causal explanation. This paper clarifies what it means for a factor to be a constraint, when such constraints are causal, and how they figure in scientific explanation.
  3. Abstract: Social scientists appeal to various “structures” in their explanations, including public policies, economic systems, and social hierarchies. Significant debate surrounds the explanatory relevance of these factors for various outcomes such as health, behavioral, and economic patterns. This paper provides a causal account of social structural explanation that is motivated by Haslanger (2016). This account suggests that social structure can be explanatory in virtue of operating as a causal constraint, which is a causal factor with unique characteristics. A novel causal framework is provided for understanding these explanations; this framework addresses puzzles regarding the mysterious causal influence of social structure, how to understand its relation to individual choice, and what makes it the main explanatory (and causally responsible) factor for various outcomes.
  4. Feature attributions attempt to highlight what inputs drive predictive power. Good attributions or explanations are thus those that produce inputs that retain this predictive power; accordingly, evaluations of explanations score their quality of prediction. However, for a class of explanations called encoding explanations, evaluations produce scores better than what appears possible from the values in the explanation alone. Probing for encoding remains a challenge because there is no general characterization of what gives the extra predictive power. We develop a definition of encoding that identifies this extra predictive power via conditional dependence and show that the definition fits existing examples of encoding. This definition implies that non-encoding explanations, in contrast to encoding ones, contain all the informative inputs used to produce the explanation, giving them a "what you see is what you get" property, which makes them transparent and simple to use. Next, we prove that existing scores (ROAR, FRESH, EVAL-X) do not rank non-encoding explanations above encoding ones, and develop STRIPE-X which ranks them correctly. After empirically demonstrating the theoretical insights, we use STRIPE-X to show that despite prompting an LLM to produce non-encoding explanations for a sentiment analysis task, the LLM-generated explanations encode.
  5. Prompted self-explanation, in which learners are induced to explain how they have solved problems, is a powerful instructional technique. Self-explanation can be prompted within learning technology by asking learners to construct their own self-explanations or select explanations from a menu. The menu-based approach has led to the best learning outcomes in the relatively few cases in which it has been studied in the context of digital learning games, contrary to some self-explanation theory. In a classroom study of 214 fifth and sixth graders, in which the students played a digital learning game, we compared three forms of prompted self-explanation: menu-based, scaffolded, and focused (i.e., open-ended text entry, but with a focused prompt). Students in the focused condition learned more than students in the menu-based condition at delayed posttest, with no other learning differences between the conditions. This suggests that focused self-explanations may be especially beneficial for retention and deeper knowledge.