Title: Explainable Graph Neural Networks for Power Grid Fault Detection
This paper proposes the application of explanation methods to enhance the interpretability of graph neural network (GNN) models for fault location in power grids. GNN models have exhibited remarkable precision in utilizing phasor data from various locations around the grid and integrating the system’s topology, an advantage rarely harnessed by alternative machine learning techniques. This capability makes GNNs highly effective in identifying fault occurrences in power grids. Despite their superior performance, these models are often criticized for their “black box” nature, which conceals the reasoning behind their predictions. This lack of transparency significantly hinders power utility operations, as interpretability is crucial to building trust, accountability, and actionable insights. This research presents a comprehensive framework that systematically evaluates state-of-the-art explanation strategies, representing the first use of such a framework for GNN models for fault location detection. By assessing the strengths and weaknesses of different explanation methods, it identifies and recommends the most effective strategies for clarifying the decision-making processes of GNN models. These recommendations aim to improve the transparency of fault predictions, allowing utility providers to better understand and trust the models’ outputs. The proposed framework not only enhances the practical usability of GNN-based systems but also contributes to advancing their adoption in critical power grid applications.
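As a rough illustration of the kind of pipeline the abstract describes, the sketch below (not the authors’ implementation) trains a small graph convolutional network to classify buses in a toy grid as healthy or faulted from placeholder phasor-style features, then applies GNNExplainer, one of the post-hoc explanation methods such a framework would evaluate, to highlight which lines and features drove a prediction. It assumes PyTorch Geometric; the topology, features, and labels are invented.

```python
# Minimal sketch, assuming PyTorch Geometric; not the paper's implementation.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer

class FaultGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)            # raw logits per bus

# Toy grid: 6 buses in a ring, 4 phasor-style features per bus (placeholders).
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 0]])
edge_index = torch.cat([edge_index, edge_index.flip(0)], dim=1)   # undirected
x = torch.randn(6, 4)
y = torch.tensor([0, 0, 1, 0, 0, 0])                # bus 2 labeled as faulted
data = Data(x=x, edge_index=edge_index, y=y)

model = FaultGCN(in_dim=4, hidden_dim=16, num_classes=2)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(200):                                 # quick toy training loop
    opt.zero_grad()
    loss = F.cross_entropy(model(data.x, data.edge_index), data.y)
    loss.backward()
    opt.step()

# Post-hoc explanation: which lines/features drove the prediction for bus 2?
explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(mode='multiclass_classification',
                      task_level='node', return_type='raw'),
)
explanation = explainer(data.x, data.edge_index, index=2)
print(explanation.edge_mask)                         # importance per grid line
```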
Award ID(s):
2145571
PAR ID:
10661422
Author(s) / Creator(s):
 ;  ;  ;  
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE Access
Volume:
13
ISSN:
2169-3536
Page Range / eLocation ID:
129520 to 129533
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Transparency and accountability have become major concerns for black-box machine learning (ML) models. Proper explanations for model behavior increase model transparency and help researchers develop more accountable models. Graph neural networks (GNNs) have recently shown superior performance over traditional methods in many graph ML problems, and explaining them has attracted increased interest. However, GNN explanation for link prediction (LP) is lacking in the literature. LP is an essential GNN task and corresponds to web applications like recommendation and sponsored search on the web. Given that existing GNN explanation methods only address node- and graph-level tasks, we propose Path-based GNN Explanation for heterogeneous Link prediction (PaGE-Link), which generates explanations with connection interpretability, enjoys model scalability, and handles graph heterogeneity. Qualitatively, PaGE-Link can generate explanations as paths connecting a node pair, which naturally captures connections between the two nodes and transfers easily to human-interpretable explanations. Quantitatively, explanations generated by PaGE-Link improve AUC for recommendation on citation and user-item graphs by 9-35% and are chosen as better by 78.79% of responses in human evaluation.
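To make the path idea concrete, here is a loose, hypothetical sketch (not the PaGE-Link algorithm): given per-edge importance scores from some explainer, it enumerates short paths between the two endpoints of a predicted link and returns the highest-scoring one as a human-readable explanation. The graph, node names, and scores are all invented.

```python
# Illustrative sketch only; edge scores and graph are made-up placeholders.
import networkx as nx

def best_explanation_path(G, edge_score, src, dst, cutoff=4):
    """Pick the path between src and dst whose edges carry the most importance."""
    best, best_score = None, float('-inf')
    for path in nx.all_simple_paths(G, src, dst, cutoff=cutoff):
        score = sum(edge_score[frozenset(e)] for e in zip(path, path[1:]))
        if score > best_score:
            best, best_score = path, score
    return best, best_score

G = nx.Graph()
G.add_edges_from([("user_a", "paper_1"), ("paper_1", "author_x"),
                  ("author_x", "paper_2"), ("user_a", "venue_v"),
                  ("venue_v", "paper_2")])
# Hypothetical per-edge importance, e.g. from a learned edge mask.
edge_score = {
    frozenset(("user_a", "paper_1")): 0.9,
    frozenset(("paper_1", "author_x")): 0.8,
    frozenset(("author_x", "paper_2")): 0.7,
    frozenset(("user_a", "venue_v")): 0.2,
    frozenset(("venue_v", "paper_2")): 0.3,
}

path, score = best_explanation_path(G, edge_score, "user_a", "paper_2")
print(path, score)   # a user -> paper -> author -> paper chain explaining the link
```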
  2. Graph Neural Networks (GNNs) have shown satisfactory performance in various graph analytical problems and have therefore become the de facto solution in a variety of decision-making scenarios. However, GNNs can yield biased results against certain demographic subgroups. Some recent works have empirically shown that the biased structure of the input network is a significant source of bias for GNNs. Nevertheless, no studies have systematically scrutinized which part of the input network structure leads to biased predictions for any given node. This low transparency about how the structure of the input network influences bias in GNN outcomes largely limits the safe adoption of GNNs in decision-critical scenarios. In this paper, we study the novel research problem of structural explanation of bias in GNNs. Specifically, we propose a novel post-hoc explanation framework to identify two edge sets that maximally account for the exhibited bias and maximally contribute to the fairness level of the GNN prediction for any given node, respectively. Such explanations not only provide a comprehensive understanding of the bias/fairness of GNN predictions but also have practical significance for building effective yet fair GNN models. Extensive experiments on real-world datasets validate that the proposed framework delivers effective structural explanations for the bias of GNNs. Open-source code can be found at https://github.com/yushundong/REFEREE.
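The following toy sketch illustrates the underlying intuition only; it is not the REFEREE framework. It attributes a demographic-parity gap to individual edges by deleting each edge in turn and re-measuring the gap in the model’s predictions. The model interface, features, and sensitive attribute are hypothetical placeholders.

```python
# Toy sketch of edge-level bias attribution; not the proposed framework.
import torch

def parity_gap(pred, sensitive):
    """|P(yhat=1 | s=0) - P(yhat=1 | s=1)| over hard predictions."""
    p0 = pred[sensitive == 0].float().mean()
    p1 = pred[sensitive == 1].float().mean()
    return (p0 - p1).abs().item()

def edge_bias_scores(model, x, edge_index, sensitive):
    """Score each edge by how much deleting it shrinks the parity gap."""
    base = parity_gap(model(x, edge_index).argmax(dim=-1), sensitive)
    scores = []
    for e in range(edge_index.size(1)):
        keep = torch.arange(edge_index.size(1)) != e      # drop one edge
        pred = model(x, edge_index[:, keep]).argmax(dim=-1)
        scores.append(base - parity_gap(pred, sensitive))  # >0: edge adds bias
    return torch.tensor(scores)

# Usage (hypothetical): edge_bias_scores(trained_gnn, data.x, data.edge_index,
# data.sensitive) returns one bias-contribution score per edge.
```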
  3. With the increasing popularity of Graph Neural Networks (GNNs) for predictive tasks on graph-structured data, research on their explainability is becoming more critical and achieving significant progress. Although many methods have been proposed to explain the predictions of GNNs, their focus is mainly on “how to generate explanations.” However, other important research questions, such as “whether the GNN explanations are inaccurate,” “what if the explanations are inaccurate,” and “how to adjust the model to generate more accurate explanations,” have received little attention. Our previous GNN Explanation Supervision (GNES) framework demonstrated its effectiveness in improving the reasonability of local explanations while maintaining or even improving the backbone GNN model’s performance. In many applications, instead of per-sample explanations, we need global explanations that are reasonable and faithful to the domain data. Simply learning to explain GNNs locally is not an optimal solution for a global understanding of the model. To improve the explanatory power of the GNES framework, we propose the Global GNN Explanation Supervision (GGNES) technique, which uses a basic trained GNN and a global extension of the loss function used in the GNES framework. This GNN creates local explanations, which are fed to a Global Logic-based GNN Explainer, an existing technique that can learn the global explanation in terms of a logic formula. These two frameworks are then trained iteratively to generate reasonable global explanations. Extensive experiments demonstrate the effectiveness of the proposed model in improving global explanations while keeping performance similar or even increasing the model’s predictive power.
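One loose way to picture such a loss extension, purely as an assumption-laden sketch and not the GGNES implementation, is a training step whose objective adds a supervision term that nudges a differentiable local edge-importance mask toward a shared global mask. The model, local_explainer, global_mask, and the batch.edge_id field are all hypothetical stand-ins.

```python
# Hypothetical sketch of a task loss plus a global explanation-supervision term.
import torch
import torch.nn.functional as F

def training_step(model, local_explainer, global_mask, batch, optimizer, lam=0.1):
    optimizer.zero_grad()
    logits = model(batch.x, batch.edge_index)
    task_loss = F.cross_entropy(logits, batch.y)

    # Per-sample edge-importance mask from a differentiable local explainer
    # (assumed interface returning one score per edge in the batch).
    local_mask = local_explainer(model, batch.x, batch.edge_index)

    # Supervision term: local explanations should agree with the current
    # global mask; batch.edge_id (assumed) maps batch edges to global ones.
    sup_loss = F.mse_loss(local_mask, global_mask[batch.edge_id])

    loss = task_loss + lam * sup_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```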
  4. Graph rationales are representative subgraph structures that best explain and support graph neural network (GNN) predictions. Graph rationalization involves the joint identification of these subgraphs during GNN training, resulting in improved interpretability and generalization. GNNs are widely used for node-level tasks such as paper classification and graph-level tasks such as molecular property prediction. However, at both levels, little attention has been given to GNN rationalization, and the lack of training examples makes it difficult to identify the optimal graph rationales. In this work, we address the problem by proposing a unified data augmentation framework with two novel operations on environment subgraphs to rationalize GNN predictions. We define the environment subgraph as the subgraph that remains after rationale identification and separation. The framework efficiently performs rationale–environment separation in the representation space for a node’s neighborhood graph or a graph’s complete structure, avoiding the high complexity of explicit graph decoding and encoding. We conduct experiments on 17 datasets spanning node classification, graph classification, and graph regression. Results demonstrate that our framework is effective and efficient in rationalizing and enhancing GNNs for different levels of tasks on graphs.
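As a conceptual sketch only (it follows the abstract’s description, not the paper’s code), the snippet below splits pooled node embeddings into a rationale part and an environment part using a learned importance score, then builds an augmented example by pairing one graph’s rationale embedding with another graph’s environment embedding, entirely in representation space. Dimensions, scores, and the mean-pooling choice are assumptions.

```python
# Conceptual sketch of rationale-environment separation in representation space.
import torch

def split_rationale_environment(h, scores, k):
    """h: [num_nodes, d] node embeddings; scores: [num_nodes] importance."""
    top = scores.topk(k).indices
    mask = torch.zeros(h.size(0), dtype=torch.bool)
    mask[top] = True
    rationale = h[mask].mean(dim=0)      # pooled rationale embedding
    environment = h[~mask].mean(dim=0)   # pooled environment embedding
    return rationale, environment

def augment(rationale_i, environment_j):
    """Pair one graph's rationale with another's environment (no graph decoding)."""
    return torch.cat([rationale_i, environment_j], dim=-1)

# Toy usage with random embeddings for two graphs.
h1, s1 = torch.randn(8, 16), torch.rand(8)
h2, s2 = torch.randn(10, 16), torch.rand(10)
r1, e1 = split_rationale_environment(h1, s1, k=3)
r2, e2 = split_rationale_environment(h2, s2, k=3)
aug = augment(r1, e2)   # rationale of graph 1 in the environment of graph 2
```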
  5. DC microgrids incorporate several converters for distributed energy resources connected to different passive and active loads. The complex interactions between the converters and components, and their potential failures, can significantly affect the grid's resilience and health; hence, they must be continually assessed and monitored. This paper presents a machine learning-assisted prognostic health monitoring (PHM) and diagnosis approach, enabling progressive interactions between the converters at multiple nodes to dynamically examine the grid's (or microgrid's) health in real time. By measuring the resulting impedance at the power converters' terminals at various grid nodes, a neural network-based classifier detects the grid's health condition and identifies potential fault-prone zones, along with the fault type and its location in the grid topology. For a faulty grid, Naive Bayes and support vector machine (SVM) classifiers are used to locate the fault and identify the fault type, respectively. A separate neural network-based regression model predicts the source power delivered and the loads at different terminals in a healthy grid network. The proposed concepts are supported by detailed analysis and simulation results in a simple four-terminal DC microgrid topology and a standard IEEE 5-bus system.
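A minimal sketch of that kind of pipeline, using synthetic stand-in data rather than anything from the paper: terminal-impedance features feed a neural classifier for the healthy/faulty decision, a Naive Bayes model for the fault zone, an SVM for the fault type, and a neural regressor for delivered power in the healthy case. Feature dimensions, label sets, and model sizes are all invented.

```python
# Hedged sketch with synthetic data; not the paper's models or datasets.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))              # impedance features at 4 terminals (Re/Im)
healthy = rng.integers(0, 2, size=500)     # 1 = healthy grid, 0 = faulty
fault_zone = rng.integers(0, 4, size=500)  # which of 4 zones contains the fault
fault_type = rng.integers(0, 3, size=500)  # e.g. line-line, line-ground, arc
power = rng.normal(size=(500, 2))          # source power and aggregate load

# Neural classifier for grid health, Naive Bayes for fault zone, SVM for fault
# type, neural regressor for power in the healthy case.
health_clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, healthy)
zone_clf = GaussianNB().fit(X[healthy == 0], fault_zone[healthy == 0])
type_clf = SVC(kernel='rbf').fit(X[healthy == 0], fault_type[healthy == 0])
power_reg = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(
    X[healthy == 1], power[healthy == 1])

x_new = rng.normal(size=(1, 8))
if health_clf.predict(x_new)[0] == 0:      # faulty: localize and classify
    print(zone_clf.predict(x_new), type_clf.predict(x_new))
else:                                      # healthy: estimate power flows
    print(power_reg.predict(x_new))
```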