Title: What is social structural explanation? A causal account
Abstract: Social scientists appeal to various “structures” in their explanations, including public policies, economic systems, and social hierarchies. Significant debate surrounds the explanatory relevance of these factors for outcomes such as health, behavioral, and economic patterns. This paper provides a causal account of social structural explanation that is motivated by Haslanger (2016). On this account, social structure can be explanatory in virtue of operating as a causal constraint, a causal factor with unique characteristics. A novel causal framework is provided for understanding these explanations. This framework addresses puzzles regarding the mysterious causal influence of social structure, how to understand its relation to individual choice, and what makes it the main explanatory (and causally responsible) factor for various outcomes.
Award ID(s):
1945647
PAR ID:
10391577
Author(s) / Creator(s):
 
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Noûs
Volume:
58
Issue:
1
ISSN:
0029-4624
Format(s):
Medium: X
Size(s):
p. 163-179
Sponsoring Org:
National Science Foundation
More Like this
  1. Edward N. Zalta & Uri Nodelman (Ed.)
    This entry discusses some accounts of causal explanation developed after approximately 1990. The focus is on three accounts: those centered on mechanisms and mechanistic explanations (Section 1), the kairetic account of explanation (Section 2), and interventionist accounts of causal explanation (Section 3). All of these take as their target explanations of why or perhaps how some phenomenon occurs (in contrast to, say, explanations of what something is, which are generally taken to be non-causal), and they attempt to capture causal explanations that aim at such explananda. Section 4 then takes up some recent proposals concerning how causal explanations may differ in explanatory depth or goodness. Section 5 discusses what is distinctive about causal (as opposed to non-causal) explanations.
  2. Abstract This paper examines constraints and their role in scientific explanation. Common views in the philosophical literature suggest that constraints are non-causal and that they provide non-causal explanations. While much of this work focuses on examples from physics, this paper explores constraints from other fields, including neuroscience, physiology, and the social sciences. I argue that these cases involve constraints that are causal and that provide a unique type of causal explanation. This paper clarifies what it means for a factor to be a constraint, when such constraints are causal, and how they figure in scientific explanation. 
  3. Abstract Humans exist as part of social-ecological systems (SES) in which biological, physical, chemical, economic, political and other social processes are tightly interwoven. Global change within these systems presents an increasingly untenable situation for long-term human security. Further, knowledge that humans possess about ourselves and SES represents a complex amalgamation of individual and collective factors. Because of various evolutionary pressures, people often reject this complex reality in favor of more simplistic perceptions and explanations. This thought paper offers an overview of how and where people acquire knowledge and how that knowledge acquisition process reflects and influences narratives, which subsequently affect efforts to address challenges in SES. We highlight three narratives as examples of constraints on finding ways forward toward a more resilient future. Our focal narratives include tendencies to conflate tame and wicked problems; to posit a false human-nature duality; and to resist the explanatory evidence from biocultural evolution. We then discuss the human cognitive propensity to create narratives to think about how we might intentionally develop narratives that are more appropriate for living in coevolving SES. 
  4.
    Explainability has emerged as a critical AI research objective, but the breadth of proposed methods and application domains suggests that criteria for explanation vary greatly. In particular, what counts as a good explanation, and what kinds of explanation are computationally feasible, has become trickier in light of opaque “black box” systems such as deep neural networks. Explanation in such cases has drifted from what many philosophers stipulated as having to involve deductive and causal principles to mere “interpretation,” which approximates what happened in the target system to varying degrees. However, such post hoc constructed rationalizations are highly problematic for social robots that operate interactively in spaces shared with humans. For in such social contexts, explanations of behavior, and, in particular, justifications for violations of expected behavior, should make reference to socially accepted principles and norms. In this article, we show how a social robot’s actions can face explanatory demands for how it came to act on its decision, what goals, tasks, or purposes its design had those actions pursue, and what norms or social constraints the system recognizes in the course of its action. As a result, we argue that explanations for social robots will need to be accurate representations of the system’s operation along causal, purposive, and justificatory lines. These explanations will need to generate appropriate references to principles and norms; explanations based on mere “interpretability” will ultimately fail to connect the robot’s behaviors to its appropriate determinants. We then lay out the foundations for a cognitive robotic architecture for HRI, together with particular component algorithms, for generating explanations and engaging in justificatory dialogues with human interactants. Such explanations track the robot’s actual decision-making and behavior, which themselves are determined by normative principles the robot can describe and use for justifications.
  5. Abstract: Survey questionnaires are commonly used by psychologists and social scientists to measure various latent traits of study subjects. Various causal inference methods, such as the potential outcome framework and structural equation models, have been used to infer causal effects. However, the majority of these methods assume knowledge of the true causal structure, which is unknown for many applications in the psychological and social sciences. This calls for alternative causal approaches for analyzing such questionnaire data. Bayesian networks are a promising option, as they do not require the causal structure to be known a priori but learn it objectively from data. Although we have seen some recent successes in using Bayesian networks to discover causality for psychological questionnaire data, their techniques tend to suffer from causal non-identifiability with observational data. In this paper, we propose the use of a state-of-the-art Bayesian network that is proven to be fully identifiable for observational ordinal data. We develop a causal structure learning algorithm based on an asymptotically justified BIC score function, a hill-climbing search strategy, and the bootstrapping technique, which is able not only to identify a unique causal structure but also to quantify the associated uncertainty. Using simulation studies, we demonstrate the power of the proposed learning algorithm by comparing it with alternative Bayesian network methods. For illustration, we consider a dataset from a psychological study of the functional relationships among the symptoms of obsessive-compulsive disorder and depression. Without any prior knowledge, the proposed algorithm reveals some plausible causal relationships. This paper is accompanied by a user-friendly open-source R package OrdCD on CRAN.
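The score-and-search strategy described in the last abstract (a BIC score combined with hill-climbing over candidate edges) can be sketched generically. The following is a minimal illustration for discrete data, not the paper's OrdCD implementation: the toy variables, the multinomial BIC, and the single-edge move set are all assumptions made for the example.

```python
import itertools
import math
import random
from collections import Counter

def bic_score(data, child, parents, card):
    """Local BIC for one node: log-likelihood minus 0.5 * k * log(N)."""
    n = len(data)
    joint = Counter((tuple(r[p] for p in parents), r[child]) for r in data)
    marg = Counter(tuple(r[p] for p in parents) for r in data)
    ll = sum(c * math.log(c / marg[pa]) for (pa, _), c in joint.items())
    k = (card[child] - 1) * math.prod(card[p] for p in parents)
    return ll - 0.5 * k * math.log(n)

def total_score(data, parents, card):
    """BIC decomposes over nodes, so the graph score is a sum of local scores."""
    return sum(bic_score(data, v, sorted(ps), card) for v, ps in parents.items())

def is_acyclic(parents):
    """Depth-first cycle check on the parent sets."""
    visited, on_path = set(), set()
    def dfs(v):
        if v in on_path:
            return False
        if v in visited:
            return True
        on_path.add(v)
        ok = all(dfs(p) for p in parents[v])
        on_path.discard(v)
        visited.add(v)
        return ok
    return all(dfs(v) for v in parents)

def hill_climb(data, variables, card):
    """Greedy search: toggle single edges while the BIC score improves."""
    parents = {v: set() for v in variables}
    score = total_score(data, parents, card)
    while True:
        best_score, best_graph = score, None
        for p, c in itertools.permutations(variables, 2):
            trial = {v: set(ps) for v, ps in parents.items()}
            if p in trial[c]:
                trial[c].discard(p)  # candidate move: delete edge p -> c
            else:
                trial[c].add(p)      # candidate move: add edge p -> c
            if not is_acyclic(trial):
                continue
            s = total_score(data, trial, card)
            if s > best_score:
                best_score, best_graph = s, trial
        if best_graph is None:
            return parents, score
        parents, score = best_graph, best_score

# Toy data: X causes Y (with 10% noise); Z is independent of both.
random.seed(0)
data = []
for _ in range(500):
    x = random.randint(0, 1)
    y = x if random.random() < 0.9 else 1 - x
    z = random.randint(0, 1)
    data.append({"X": x, "Y": y, "Z": z})

card = {"X": 2, "Y": 2, "Z": 2}
graph, score = hill_climb(data, ["X", "Y", "Z"], card)
print(graph)  # an edge between X and Y should be recovered
```

The paper's bootstrapping step would wrap `hill_climb` in resampled runs of the same search, reporting how often each edge appears; note also that a plain BIC on two discrete variables scores both edge orientations equally, which is exactly the identifiability gap the ordinal method in the abstract is designed to close.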