Title: Learning causality with graphs
Abstract Recent years have witnessed rapid growth of machine learning methods on graph data, especially those powered by neural networks. Despite their success in many real-world scenarios, the majority of these methods focus only on predictive or descriptive tasks and lack consideration of causality. Causal inference can reveal the causality inside data, promote human understanding of the learning process and model predictions, and serve as a significant component of artificial intelligence (AI). An important problem in causal inference is causal effect estimation, which aims to estimate the causal effect of a certain treatment (e.g., prescription of a medicine) on an outcome (e.g., cure of a disease) at an individual level (e.g., each patient) or a population level (e.g., a group of patients). In this paper, we introduce the background of causal effect estimation from observational data, envision the challenges of causal effect estimation with graphs, and summarize representative approaches to causal effect estimation with graphs from recent years. Furthermore, we provide insights into future research directions in related areas. Link to video abstract: https://youtu.be/BpDPOOqw-ns
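The estimation problem the abstract describes can be illustrated with a minimal, self-contained sketch (not from the paper; all numbers and variable names are made up for illustration): with observational data, a naive difference in mean outcomes is biased by a confounder, while adjusting for the confounder recovers the true effect.

```python
import numpy as np

# Illustrative sketch (not from the paper): estimating the average causal
# effect of a binary treatment from observational data, where a confounder
# biases the naive comparison. The true effect is set to 2.0.
rng = np.random.default_rng(0)
n = 100_000
confounder = rng.binomial(1, 0.5, n)                  # e.g., disease severity
treatment = rng.binomial(1, 0.2 + 0.6 * confounder)   # sicker patients treated more often
outcome = 2.0 * treatment - 3.0 * confounder + rng.normal(0, 1, n)

# Naive estimate: difference in mean outcomes, biased by the confounder.
naive = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

# Backdoor adjustment: compare within confounder strata, then average the
# per-stratum effects weighted by stratum frequency.
adjusted = sum(
    (outcome[(treatment == 1) & (confounder == c)].mean()
     - outcome[(treatment == 0) & (confounder == c)].mean())
    * (confounder == c).mean()
    for c in (0, 1)
)
print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")  # adjusted is close to 2.0
```

This only covers the classical non-graph setting; the paper's subject is how such estimation changes when units are connected in a graph.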
Award ID(s):
2006844 2144209 2154962
PAR ID:
10443066
Author(s) / Creator(s):
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
AI Magazine
Volume:
43
Issue:
4
ISSN:
0738-4602
Page Range / eLocation ID:
p. 365-375
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Randomization inference is a powerful tool in early phase vaccine trials when estimating the causal effect of a regimen against a placebo or another regimen. Randomization-based inference often focuses on testing either Fisher’s sharp null hypothesis of no treatment effect for any participant or Neyman’s weak null hypothesis of no sample average treatment effect. Many recent efforts have explored conducting exact randomization-based inference for other summaries of the treatment effect profile, for instance, quantiles of the treatment effect distribution function. In this article, we systematically review methods that conduct exact, randomization-based inference for quantiles of individual treatment effects (ITEs) and extend some results to a special case where naïve participants are expected not to exhibit responses to highly specific endpoints. These methods are suitable for completely randomized trials, stratified completely randomized trials, and a matched study comparing two non-randomized arms from possibly different trials. We evaluate the usefulness of these methods using synthetic data in simulation studies. Finally, we apply these methods to HIV Vaccine Trials Network Study 086 (HVTN 086) and HVTN 205 and showcase a wide range of application scenarios of the methods. R code that replicates all analyses in this article can be found on the first author's GitHub page at https://github.com/Zhe-Chen-1999/ITE-Inference.
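The exactness of randomization inference under Fisher's sharp null can be seen in a small sketch (data are made up; the article's methods target the harder problem of quantiles of ITEs): under the sharp null all outcomes are fixed, so the full randomization distribution of a test statistic can be enumerated.

```python
import itertools
import numpy as np

# Hedged sketch of an exact randomization test of Fisher's sharp null (no
# effect for any participant) in a completely randomized design. The data
# below are invented for illustration.
outcomes = np.array([3.1, 2.4, 4.0, 1.8, 2.9, 3.6, 2.2, 1.5])
treated = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)

observed = outcomes[treated].mean() - outcomes[~treated].mean()

# Under the sharp null, only the assignment varies. Enumerate all
# C(8, 4) = 70 equally likely assignments for an exact p-value.
n, k = len(outcomes), int(treated.sum())
stats = []
for idx in itertools.combinations(range(n), k):
    mask = np.zeros(n, dtype=bool)
    mask[list(idx)] = True
    stats.append(outcomes[mask].mean() - outcomes[~mask].mean())

p_value = np.mean(np.abs(stats) >= abs(observed) - 1e-12)
print(f"observed diff: {observed:.3f}, exact p-value: {p_value:.3f}")
```

Because the observed assignment is itself one of the 70 enumerated ones, the p-value is never zero, which is what makes the test exact at any sample size.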
  2. Uncovering rationales behind predictions of graph neural networks (GNNs) has received increasing attention over recent years. Instance-level GNN explanation aims to discover critical input elements, such as nodes or edges, that the target GNN relies upon for making predictions. Though various algorithms have been proposed, most of them formalize this task as searching for the minimal subgraph that preserves the original predictions. However, an inductive bias is deep-rooted in this framework: several subgraphs can result in the same or similar outputs as the original graphs. Consequently, these methods risk providing spurious explanations and failing to provide consistent explanations. Applying them to explain weakly performing GNNs would further amplify these issues. To address this problem, we theoretically examine the predictions of GNNs from the causality perspective. Two typical reasons for spurious explanations are identified: the confounding effect of latent variables such as distribution shift, and causal factors distinct from the original input. Observing that both confounding effects and diverse causal rationales are encoded in internal representations, we propose a new explanation framework with an auxiliary alignment loss, which is theoretically proven to intrinsically optimize a more faithful explanation objective. Concretely, for this alignment loss, a set of different perspectives are explored: anchor-based alignment, distributional alignment based on Gaussian mixture models, mutual-information-based alignment, and so on. A comprehensive study is conducted both on the effectiveness of this new framework in terms of explanation faithfulness/consistency and on the advantages of these variants. For our code, please refer to the following URL: https://github.com/TianxiangZhao/GraphNNExplanation
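The anchor-based variant of the alignment idea can be sketched as follows. All names, shapes, and the specific distance are assumptions for illustration, not the paper's API: the internal representation of a candidate explanation subgraph is pulled toward precomputed "anchor" embeddings, so explanations stay on the distribution the GNN was trained on rather than drifting off-manifold.

```python
import numpy as np

# Hypothetical sketch of an anchor-based alignment loss: the embedding of an
# explanation subgraph is penalized by its squared distance to the nearest
# anchor (e.g., a class prototype in the GNN's representation space).
def anchor_alignment_loss(z_sub: np.ndarray, anchors: np.ndarray) -> float:
    """Squared distance from a subgraph embedding to its nearest anchor."""
    d2 = ((anchors - z_sub) ** 2).sum(axis=1)  # squared distance to each anchor
    return float(d2.min())                     # align to the closest anchor

anchors = np.array([[1.0, 0.0], [0.0, 1.0]])   # illustrative class prototypes
z_good = np.array([0.9, 0.1])                  # near an anchor -> small loss
z_spurious = np.array([-1.0, -1.0])            # off-manifold -> large loss
print(anchor_alignment_loss(z_good, anchors),
      anchor_alignment_loss(z_spurious, anchors))
```

In the framework above, such a term would be added to the usual explanation objective; the abstract's other variants replace the nearest-anchor distance with a Gaussian-mixture likelihood or a mutual-information estimate.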
  3. Causality lays the foundation for the trajectory of our world. Causal inference (CI), which aims to infer intrinsic causal relations among variables of interest, has emerged as a crucial research topic. Nevertheless, the lack of observation of important variables (e.g., confounders, mediators, exogenous variables, etc.) severely compromises the reliability of CI methods. The issue may arise from the inherent difficulty in measuring the variables. Additionally, in observational studies where variables are passively recorded, certain covariates might be inadvertently omitted by the experimenter. Depending on the type of unobserved variables and the specific CI task, various consequences can be incurred if these latent variables are carelessly handled, such as biased estimation of causal effects, incomplete understanding of causal mechanisms, lack of individual-level causal consideration, etc. In this survey, we provide a comprehensive review of recent developments in CI with latent variables. We start by discussing traditional CI techniques when variables of interest are assumed to be fully observed. Afterward, under the taxonomy of circumvention and inference-based methods, we provide an in-depth discussion of various CI strategies to handle latent variables, covering the tasks of causal effect estimation, mediation analysis, counterfactual reasoning, and causal discovery. Furthermore, we generalize the discussion to graph data where interference among units may exist. Finally, we offer fresh aspects for further advancement of CI with latent variables, especially new opportunities in the era of large language models (LLMs). 
  4.
    One fundamental problem in causality learning is to estimate the causal effects of one or multiple treatments (e.g., prescribed medicines) on an important outcome (e.g., cure of a disease). One major challenge of causal effect estimation is the existence of unobserved confounders -- unobserved variables that affect both the treatments and the outcome. Recent studies have shown that by jointly modeling how instances are assigned to different treatments, the patterns of unobserved confounders can be captured through learned latent representations. However, the interpretability of the representations in these works is limited. In this paper, we approach the multi-cause effect estimation problem from a new perspective by learning disentangled representations of confounders. The disentangled representations not only facilitate treatment effect estimation but also strengthen the understanding of the causality learning process. Experimental results on both synthetic and real-world datasets demonstrate the advantages of our proposed framework from different aspects.
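The cited idea, that jointly modeling how treatments are assigned can capture unobserved confounders, has a simple intuition worth sketching (purely illustrative; the paper's disentangled-representation model is far more involved): when one hidden variable drives many treatments at once, a factor model fit to the treatment matrix can recover a substitute for it.

```python
import numpy as np

# Hedged sketch of the multi-cause deconfounding intuition: a single
# unobserved confounder u drives all m treatments, so the first principal
# component of the treatment matrix serves as a substitute confounder.
rng = np.random.default_rng(1)
n, m = 5000, 10
u = rng.normal(size=n)                                 # unobserved confounder
treatments = u[:, None] + rng.normal(0, 0.5, (n, m))   # all causes share u

# First principal component of the centered treatments.
centered = treatments - treatments.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
u_hat = centered @ vt[0]                               # substitute confounder

corr = abs(np.corrcoef(u, u_hat)[0, 1])
print(f"|corr(true confounder, substitute)| = {corr:.2f}")  # close to 1
```

A single principal component offers no interpretability, which is the gap the paper's disentangled representations aim to close.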
  5. Abstract This paper introduces an innovative and streamlined design of a robot, resembling a bicycle, created to effectively inspect a wide range of ferromagnetic structures, even those with intricate shapes. The key highlight of this robot lies in its mechanical simplicity coupled with remarkable agility. The locomotion strategy hinges on the arrangement of two magnetic wheels in a configuration akin to a bicycle, augmented by two independent steering actuators. This configuration grants the robot the exceptional ability to move in multiple directions. Moreover, the robot employs a reciprocating mechanism that allows it to alter its shape, thereby surmounting obstacles effortlessly. An inherent trait of the robot is its innate adaptability to uneven and intricate surfaces on steel structures, facilitated by a dynamic joint. To underscore its practicality, the robot's application is demonstrated through the utilization of an ultrasonic sensor for gauging steel thickness, coupled with a pragmatic deployment mechanism. By integrating a camera and a deep-learning-based defect detection model, the robot can automatically identify and pinpoint areas of rust on steel surfaces. The paper undertakes a thorough analysis, encompassing robot kinematics, adhesive force, potential sliding and turn-over scenarios, and motor power requirements. These analyses collectively validate the stability and robustness of the proposed design. Notably, the theoretical calculations established in this study serve as a valuable blueprint for developing future robots tailored for climbing steel structures. The paper substantiates its claims with empirical evidence, sharing results from extensive experiments and real-world deployments on diverse steel bridges, situated in both Nevada and Georgia.
These tests comprehensively affirm the robot's proficiency in adhering to surfaces, navigating challenging terrains, and executing thorough inspections. A comprehensive visual representation of the robot's trials and field deployments is presented in videos accessible at the following links: https://youtu.be/Qdh1oz_oxiQ and https://youtu.be/vFFq79O49dM.
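The adhesive-force analysis mentioned in the abstract comes down to a friction constraint; a back-of-envelope sketch with made-up numbers (not the paper's values) shows the shape of the calculation: on a vertical steel wall, the magnetic wheels must press hard enough that friction carries the robot's weight with a safety margin.

```python
# Back-of-envelope adhesion check for a two-wheeled magnetic climbing robot
# on a vertical steel surface. All parameter values are assumptions for
# illustration, not taken from the paper.
mass_kg = 4.0   # assumed robot mass
mu = 0.5        # assumed wheel-steel friction coefficient
safety = 2.0    # assumed design safety factor
g = 9.81        # gravitational acceleration, m/s^2

weight = mass_kg * g                       # load friction must carry on a wall
required_adhesion = safety * weight / mu   # total magnetic normal force needed
per_wheel = required_adhesion / 2          # split across the two magnetic wheels
print(f"required adhesion per wheel: {per_wheel:.0f} N")
```

The paper's actual analysis additionally covers sliding and turn-over on curved and inclined surfaces, which tighten this bound.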