Title: Does Obesity Shorten Life? Or is it the Soda? On Non-manipulable Causes
Abstract: Non-manipulable factors, such as gender or race, have posed conceptual and practical challenges to causal analysts. On the one hand, these factors do have consequences; on the other hand, they do not fit into the experimentalist conception of causation. This paper addresses this challenge in the context of public debates over the health cost of obesity and offers a new perspective based on the theory of Structural Causal Models (SCM).
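The abstract's distinction between a non-manipulable state (obesity) and a manipulable cause (soda consumption) can be illustrated with a toy linear SCM. All variable names, coefficients, and noise terms below are hypothetical illustrations, not taken from the paper:

```python
import random

random.seed(0)

def sample(do_soda=None):
    """Draw one unit from a toy SCM: soda -> obesity -> lifespan.
    Passing do_soda simulates the intervention do(Soda = do_soda)."""
    soda = random.gauss(0, 1) if do_soda is None else do_soda
    obesity = 0.8 * soda + random.gauss(0, 1)       # obesity as a mediator
    lifespan = 75 - 2.0 * obesity + random.gauss(0, 1)
    return lifespan

n = 100_000
observational = sum(sample() for _ in range(n)) / n
no_soda = sum(sample(do_soda=-1.0) for _ in range(n)) / n

# Intervening on the manipulable cause (soda) shifts the lifespan
# distribution through obesity, even though "obesity" itself is never
# directly manipulated -- the SCM assigns it a well-defined causal role anyway.
print(observational, no_soda)
```

In an SCM, the effect of the non-manipulable mediator is still well defined through the structural equations, which is the perspective the paper develops.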
Award ID(s):
1704932
PAR ID:
10098083
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Causal Inference
Volume:
6
Issue:
2
ISSN:
2193-3685
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Identification and quantitative understanding of factors that influence occupant energy behavior and thermal state during the design phase are critical in supporting effective energy-efficient design. To achieve this, immersive virtual environments (IVEs) have recently shown potential as a tool to simulate occupant energy behaviors and collect context-dependent behavior data for buildings under design. On the other hand, prior models of occupant energy behaviors and thermal states used correlation-based approaches, which failed to capture the underlying causal interactions between the influencing factors and hence were unable to uncover the true causing factors. Therefore, in this study, the authors investigate the applicability of causal inference for identifying the causing factors of occupant/participant energy behavioral intentions and their thermal states in the IVE condition and compare those results with the baseline in-situ condition. The energy behavioral intentions here are a proximal antecedent of actual energy behaviors. A set of experiments involving 72 human subjects was performed through the use of a head-mounted device (HMD) in a climate chamber. The subjects were exposed to three different step temperatures (cool, neutral, warm) under an IVE and a baseline in-situ condition. Participants' individual factors, behavioral factors, skin temperatures, virtual experience factors, thermal states (sensation, acceptability, comfort), and energy behavioral intentions were collected during the experiments. Structural causal models were learnt from data using the elicitation method in conjunction with the PC-Stable algorithm. The findings show that the causal inference framework is a potentially effective method for identifying causing factors of thermal states and energy behavioral intentions as well as quantifying their causal effects.
In addition, the study shows that in IVE experiments, the participants' virtual experience factors such as their immersion, presence, and cybersickness were not the causing factors of thermal states and energy behavioral intentions. Furthermore, the study suggests that participants' behavioral factors such as their attitudes toward energy conservation and perceived behavioral control to conserve energy were the causing factors of their energy behavioral intentions. Also, the indoor temperature was a causing factor of general thermal sensation and overall skin temperature. The paper also discusses other findings, including discrepancies, limitations of the study, and recommendations for future studies. 
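Constraint-based discovery methods such as PC-Stable, mentioned in the abstract above, rest on conditional-independence tests: an edge between two variables is removed when they become independent given some conditioning set. A minimal sketch of that idea on synthetic data, using the three-variable partial-correlation formula (the variable names and chain structure are illustrative, not the authors' data or code):

```python
import math
import random

random.seed(1)

# Synthetic causal chain mimicking: temperature -> sensation -> intention
n = 5_000
temp = [random.gauss(0, 1) for _ in range(n)]
sensation = [0.9 * t + random.gauss(0, 0.5) for t in temp]
intention = [0.7 * s + random.gauss(0, 0.5) for s in sensation]

def corr(x, y):
    """Sample Pearson correlation of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_corr(x, y, z):
    """Partial correlation of x and y given a single conditioning variable z."""
    rxy, rxz, ryz = corr(x, y), corr(x, z), corr(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

r_marginal = corr(temp, intention)                    # large: dependent
r_partial = partial_corr(temp, intention, sensation)  # near zero given mediator
print(r_marginal, r_partial)
```

A PC-style algorithm would keep the temp–sensation and sensation–intention edges but delete the direct temp–intention edge, because the partial correlation vanishes once the mediator is conditioned on.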
  2. Diffusion probabilistic models (DPMs) have become the state-of-the-art in high-quality image generation. However, DPMs have an arbitrary noisy latent space with no interpretable or controllable semantics. Although there has been significant research effort to improve image sample quality, there is little work on representation-controlled generation using diffusion models. Specifically, causal modeling and controllable counterfactual generation using DPMs is an underexplored area. In this work, we propose CausalDiffAE, a diffusion-based causal representation learning framework to enable counterfactual generation according to a specified causal model. Our key idea is to use an encoder to extract high-level semantically meaningful causal variables from high-dimensional data and model stochastic variation using reverse diffusion. We propose a causal encoding mechanism that maps high-dimensional data to causally related latent factors and parameterize the causal mechanisms among latent factors using neural networks. To enforce the disentanglement of causal variables, we formulate a variational objective and leverage auxiliary label information in a prior to regularize the latent space. We propose a DDIM-based counterfactual generation procedure subject to do-interventions. Finally, to address the limited label supervision scenario, we also study the application of CausalDiffAE when a part of the training data is unlabeled, which also enables granular control over the strength of interventions in generating counterfactuals during inference. We empirically show that CausalDiffAE learns a disentangled latent space and is capable of generating high-quality counterfactual images. 
  3. There is a substantial and ever-growing corpus of evidence and literature exploring the impacts of Artificial intelligence (AI) technologies on society, politics, and humanity as a whole. A separate, parallel body of work has explored existential risks to humanity, including but not limited to that stemming from unaligned Artificial General Intelligence (AGI). In this paper, we problematise the notion that current and near-term artificial intelligence technologies have the potential to contribute to existential risk by acting as intermediate risk factors, and that this potential is not limited to the unaligned AGI scenario. We propose the hypothesis that certain already-documented effects of AI can act as existential risk factors, magnifying the likelihood of previously identified sources of existential risk. Moreover, future developments in the coming decade hold the potential to significantly exacerbate these risk factors, even in the absence of artificial general intelligence. Our main contribution is a (non-exhaustive) exposition of potential AI risk factors and the causal relationships between them, focusing on how AI can affect power dynamics and information security. This exposition demonstrates that there exist causal pathways from AI systems to existential risks that do not presuppose hypothetical future AI capabilities. 
  4. Measuring conditional dependence is one of the important tasks in statistical inference and is fundamental in causal discovery, feature selection, dimensionality reduction, Bayesian network learning, and others. In this work, we explore the connection between conditional dependence measures induced by distances on a metric space and reproducing kernels associated with a reproducing kernel Hilbert space (RKHS). For certain distance and kernel pairs, we show the distance-based conditional dependence measures to be equivalent to that of kernel-based measures. On the other hand, we also show that some popular kernel conditional dependence measures based on the Hilbert-Schmidt norm of a certain cross-conditional covariance operator do not have a simple distance representation, except in some limiting cases.
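The distance-based measures discussed in the abstract above build on distance covariance, which detects arbitrary (including nonlinear) dependence. A minimal pure-Python sketch of the biased sample distance correlation for scalar samples, shown only to illustrate the unconditional building block, not the paper's conditional measures:

```python
import math

def dcor(x, y):
    """Biased sample distance correlation of two equal-length scalar samples."""
    n = len(x)

    def centered(v):
        d = [[abs(a - b) for b in v] for a in v]        # pairwise distances
        row = [sum(r) / n for r in d]
        grand = sum(row) / n
        # double-center: subtract row and column means, add back grand mean
        return [[d[i][j] - row[i] - row[j] + grand
                 for j in range(n)] for i in range(n)]

    A, B = centered(x), centered(y)
    dcov2 = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n ** 2
    dvarx = sum(a * a for r in A for a in r) / n ** 2
    dvary = sum(b * b for r in B for b in r) / n ** 2
    return math.sqrt(dcov2 / math.sqrt(dvarx * dvary))

xs = [0.1 * i for i in range(50)]
print(dcor(xs, xs))                        # identical samples -> 1.0
print(dcor(xs, [x * x for x in xs]))       # nonlinear but dependent -> well above 0
```

Unlike Pearson correlation, distance correlation is zero only under independence, which is what makes the distance/kernel equivalence results in the abstract substantive.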
  5. Fitch, T.; Lamm, C.; Leder, H.; Teßmar-Raible, K. (Eds.)
    We make frequent decisions about how to manage our health, yet do so with information that is highly complex or received piecemeal. Causal models can provide guidance about how components of a complex system interact, yet models that provide a complete causal story may be more complex than people can reason about. Prior work has provided mixed insights into our ability to make decisions with causal models, showing that people can use them in novel domains but that they may impede decisions in familiar ones. We examine how tailoring causal information to the question at hand may aid decision making, using simple diagrams with only the relevant causal paths (Experiment 1) or those paths highlighted within a complex causal model (Experiment 2). We find that diagrams tailored to a choice improve decision accuracy over complex diagrams or prior knowledge, providing new evidence for how causal models can aid decisions. 