Title: FACS-GCN: Fairness-Aware Cost-Sensitive Boosting of Graph Convolutional Networks
Graph neural networks (GNNs) have emerged as a powerful tool for modeling graph data due to their ability to learn a concise representation of the data by integrating node attributes and link information in a principled fashion. However, despite their promise, several practical challenges must be overcome to use them effectively for node classification problems. In particular, current approaches are vulnerable to different kinds of biases inherent in the graph data. First, if the class distribution is imbalanced, the GNN's loss function is biased towards classifying the majority class correctly rather than the minority class, which hurts performance on the latter. Second, due to the homophily effect, the learned representation and subsequent downstream tasks may favor certain demographic groups over others when applied to social network data. To mitigate such biases, we propose a novel framework called Fairness-Aware Cost-Sensitive Graph Convolutional Network (FACS-GCN) for classifying nodes in networks with skewed class distributions. Our approach combines a cost-sensitive exponential loss with an adversarial learning component to alleviate the ill effects of both biases. The framework employs a stagewise additive modeling approach to ensure there is no significant loss in accuracy when imparting fairness into the GNN. Experimental results on six benchmark graph datasets demonstrate the effectiveness of FACS-GCN against comparable baseline methods in terms of promoting fairness while maintaining high model accuracy on the majority of the datasets.
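To make the two loss ingredients named above concrete, here is a minimal sketch, assuming a standard PyTorch setup: a cost-sensitive exponential loss that up-weights minority-class nodes, combined with an adversarial term that rewards the encoder for hiding the sensitive attribute. The cost ratio, network shapes, and the convention that the minority class is labeled -1 are illustrative assumptions, not the authors' exact formulation (the stagewise boosting machinery is omitted).

```python
import torch
import torch.nn as nn

def cost_sensitive_exp_loss(margins, labels, minority_cost=5.0):
    """Exponential loss exp(-y * f(x)) with a higher cost on minority-class
    nodes. margins: real-valued scores f(x), shape (N,); labels: +1/-1,
    with the minority class assumed to be -1 (the cost is an assumption)."""
    costs = torch.where(labels < 0,
                        torch.full_like(margins, minority_cost),
                        torch.ones_like(margins))
    return (costs * torch.exp(-labels * margins)).mean()

class SensitiveAttrAdversary(nn.Module):
    """Adversary that tries to recover the sensitive attribute from embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h):
        return self.net(h).squeeze(-1)

def encoder_objective(margins, labels, h, sens, adversary, lam=0.5):
    # The encoder minimizes the cost-sensitive loss while trying to *fool*
    # the adversary, hence the negative sign on the adversarial term; the
    # adversary itself is trained separately to minimize adv_loss.
    adv_loss = nn.functional.binary_cross_entropy_with_logits(
        adversary(h), sens.float())
    return cost_sensitive_exp_loss(margins, labels) - lam * adv_loss
```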
Award ID(s):
1939368
NSF-PAR ID:
10384981
Journal Name:
Proceedings of the 2022 International Joint Conference on Neural Networks (IJCNN)
Page Range / eLocation ID:
1 to 8
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. There has been significant progress in improving the performance of graph neural networks (GNNs) through enhancements in graph data, model architecture design, and training strategies. For fairness in graphs, recent studies achieve fair representations and predictions through either graph data pre-processing (e.g., node feature masking and topology rewiring) or fair training strategies (e.g., regularization, adversarial debiasing, and fair contrastive learning). How to achieve fairness in graphs from the model architecture perspective is less explored. More importantly, GNNs exhibit worse fairness performance than multilayer perceptrons because their model architecture (i.e., neighbor aggregation) amplifies biases. To this end, we aim to achieve fairness via a new GNN architecture. We propose Fair Message Passing (FMP), designed within a unified optimization framework for GNNs. Notably, FMP explicitly incorporates sensitive attributes in forward propagation for the node classification task, using cross-entropy loss without data pre-processing. In FMP, aggregation is first applied to utilize neighbors' information, and then a bias mitigation step explicitly pushes the representation centers of demographic groups closer together. In this way, the FMP scheme can aggregate useful information from neighbors while mitigating bias, achieving a better fairness-prediction trade-off. Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets. The code is available at https://github.com/zhimengj0326/FMP.
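As a rough illustration of the aggregate-then-debias pattern this abstract describes, the sketch below applies one round of neighbor aggregation and then shifts each demographic group's representation center toward the global center. The mean-shift form and the step size `gamma` are assumptions for illustration; the exact FMP update is in the linked repository.

```python
import torch

def fair_message_passing_step(h, adj_norm, group, gamma=0.5):
    """One aggregate-then-debias step.
    h: (N, d) node representations; adj_norm: (N, N) normalized adjacency;
    group: (N,) integer sensitive-group ids."""
    h = adj_norm @ h                       # 1) aggregation over neighbors
    global_center = h.mean(dim=0)          # 2) bias mitigation: pull each
    for g in group.unique():               #    group's center toward the
        mask = group == g                  #    shared global center
        h[mask] = h[mask] + gamma * (global_center - h[mask].mean(dim=0))
    return h
```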
  2. As machine learning becomes more widely adopted across domains, it is critical that researchers and ML engineers consider the inherent biases in the data that may be perpetuated by the model. Recently, many studies have shown that such biases also become embedded in Graph Neural Network (GNN) models if the input graph is biased, potentially to the disadvantage of underserved and underrepresented communities. In this work, we aim to mitigate the bias learned by GNNs by jointly optimizing two different loss functions: one for the task of link prediction and one for the task of demographic parity. We further implement three different techniques inspired by graph modification approaches: the Global Fairness Optimization (GFO), Constrained Fairness Optimization (CFO), and Fair Edge Weighting (FEW) models. These techniques mimic the effects of changing the underlying graph structure within the GNN and offer a greater degree of interpretability than more integrated neural network methods. Our proposed models emulate microscopic or macroscopic edits to the input graph while training GNNs and learn node embeddings that are both accurate and fair in the context of link recommendations. We demonstrate the effectiveness of our approach on four real-world datasets and show that we can improve recommendation fairness by several factors at negligible cost to link prediction accuracy.
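A hedged sketch of the joint objective this abstract describes: a link-prediction loss plus a demographic-parity penalty. The dot-product edge scorer and this particular parity gap (difference in mean predicted link probability between intra-group and inter-group pairs) are assumptions for concreteness, not the GFO/CFO/FEW formulations.

```python
import torch
import torch.nn.functional as F

def joint_fair_link_loss(z, edges, edge_labels, group, alpha=1.0):
    """z: (N, d) node embeddings; edges: (2, E) endpoint indices of candidate
    links; edge_labels: (E,) 1 for observed edges, 0 for negative samples;
    group: (N,) sensitive-group ids."""
    scores = (z[edges[0]] * z[edges[1]]).sum(dim=-1)          # dot-product scorer
    task = F.binary_cross_entropy_with_logits(scores, edge_labels.float())

    probs = torch.sigmoid(scores)
    same = group[edges[0]] == group[edges[1]]
    # Demographic parity for links: intra-group and inter-group pairs
    # should be recommended at similar rates.
    parity_gap = (probs[same].mean() - probs[~same].mean()).abs()
    return task + alpha * parity_gap
```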
  3. Graphs are widely found in social network analysis and e-commerce, where Graph Neural Networks (GNNs) are the state-of-the-art model. GNNs can be biased due to sensitive attributes and network topology. With existing work that learns a fair node representation or adjacency matrix, achieving a strong guarantee of group fairness while preserving prediction accuracy remains challenging, with the fairness-accuracy trade-off remaining obscure to human decision-makers. We first define and analyze a novel upper bound on group fairness to optimize the adjacency matrix for fairness without significantly harming prediction accuracy. To understand the nuances of the fairness-accuracy trade-off, we further propose macroscopic and microscopic explanation methods to reveal the trade-offs and the space that one can exploit. The macroscopic explanation method is based on stratified sampling and linear programming to deterministically explain the dynamics of group fairness and prediction accuracy. Drilling down to the microscopic level, we propose a path-based explanation that reveals how network topology leads to the trade-off. On seven graph datasets, we demonstrate that the novel upper bound achieves more efficient fairness-accuracy trade-offs and that the explanation methods can clearly pinpoint where the trade-off is improved.
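The paper's specific upper bound is not stated in the abstract, so the following is only a generic sketch of the underlying optimization pattern: treat the adjacency matrix as trainable and penalize a group-fairness surrogate (here a plain demographic-parity gap, standing in for the paper's tighter bound) alongside the task loss. The one-layer GCN-style scorer and all names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def parity_surrogate(preds, group):
    """Absolute gap in mean positive-class score between two groups."""
    return (preds[group == 0].mean() - preds[group == 1].mean()).abs()

def adjacency_fairness_step(A_logits, X, W, y, group, beta=1.0, lr=1e-2):
    """One gradient step on a trainable (dense) adjacency. A_logits must be
    created with requires_grad=True; X: (N, d) features, W: (d, 1) fixed
    classifier weights, y: (N,) binary labels."""
    A = torch.sigmoid(A_logits)                    # soft, learnable adjacency
    preds = torch.sigmoid(A @ X @ W).squeeze(-1)   # one-layer GCN-style scores
    loss = F.binary_cross_entropy(preds, y.float()) \
        + beta * parity_surrogate(preds, group)
    loss.backward()
    with torch.no_grad():
        A_logits -= lr * A_logits.grad             # edit the graph, not the model
        A_logits.grad.zero_()
    return loss.item()
```

In use, the learned soft adjacency could be thresholded back into a sparse graph once training converges.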
  4. While the advent of Graph Neural Networks (GNNs) has greatly improved node and graph representation learning in many applications, the neighborhood aggregation scheme exposes additional vulnerabilities to adversaries seeking to extract node-level information about sensitive attributes. In this paper, we study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data. We propose a framework to locally filter out predetermined sensitive attributes via adversarial training with the total variation and the Wasserstein distance. Our method creates a strong defense against inference attacks, while suffering only a small loss in task performance. Theoretically, we analyze the effectiveness of our framework against a worst-case adversary, and characterize an inherent trade-off between maximizing predictive accuracy and minimizing information leakage. Experiments across multiple datasets from recommender systems, knowledge graphs and quantum chemistry demonstrate that the proposed approach provides a robust defense across various graph structures and tasks, while producing competitive GNN encoders for downstream tasks.
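A minimal sketch of the adversarial-obfuscation idea, assuming a single critic and a binary sensitive attribute: the encoder is penalized when the critic can separate the embedding distributions of the two groups, here with a Wasserstein-style objective (the paper also uses total variation; the alternating critic updates and Lipschitz constraint are omitted, and the embedding size is an assumption).

```python
import torch
import torch.nn as nn

EMB_DIM = 64  # assumed embedding size

# Critic scoring how separable the two sensitive groups' embeddings are.
critic = nn.Sequential(nn.Linear(EMB_DIM, EMB_DIM), nn.ReLU(),
                       nn.Linear(EMB_DIM, 1))

def wasserstein_gap(h, sens):
    """Mean critic-score difference between the two sensitive groups; the
    critic is trained (separately) to maximize this, the encoder to shrink it."""
    scores = critic(h).squeeze(-1)
    return scores[sens == 1].mean() - scores[sens == 0].mean()

def encoder_loss(task_loss, h, sens, lam=1.0):
    # Keep the downstream task accurate while limiting attribute leakage.
    return task_loss + lam * wasserstein_gap(h, sens)
```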
  5. Graph neural networks (GNNs) have achieved tremendous success in many graph learning tasks such as node classification, graph classification, and link prediction. For the classification task, GNNs' performance often depends highly on the number of labeled nodes and thus can be significantly hampered by the expensive annotation cost. The sparse literature on active learning for GNNs has primarily focused on selecting only one sample per iteration, which becomes inefficient for large-scale datasets. In this paper, we study the batch active learning setting for GNNs, where the learning agent can acquire labels for multiple samples at each step. We formulate batch active learning as a cooperative multi-agent reinforcement learning problem and present a novel reinforced batch-mode active learning framework (BIGENE). To avoid the combinatorial explosion of the joint action space, we introduce a value decomposition method that factorizes the total Q-value into the average of individual Q-values. Moreover, we propose a novel multi-agent Q-network consisting of a graph convolutional network (GCN) component and a gated recurrent unit (GRU) component. The GCN component takes both the informativeness of and interdependencies between nodes into account, and the GRU component enables the agent to consider interactions between nodes selected in the same batch. Experimental results on multiple public datasets demonstrate the effectiveness and efficiency of our proposed method.
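A toy sketch of the value-decomposition idea in this abstract: the joint Q-value of a selected batch is factorized as the mean of per-node Q-values, sidestepping the combinatorial joint action space. The GCN+GRU Q-network is reduced here to a linear scorer purely for illustration; all names are assumptions.

```python
import torch
import torch.nn as nn

class PerNodeQ(nn.Module):
    """Stand-in for BIGENE's GCN+GRU Q-network: scores each candidate node."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, node_states):           # (N, dim) -> (N,)
        return self.score(node_states).squeeze(-1)

def batch_q_value(q_net, node_states, batch_idx):
    """Value decomposition: the joint Q of a selected batch is the average
    of its members' individual Q-values."""
    return q_net(node_states)[batch_idx].mean()

def greedy_batch(q_net, node_states, k):
    """Under the average decomposition, the best size-k batch is simply the
    top-k nodes by individual Q-value."""
    with torch.no_grad():
        return torch.topk(q_net(node_states), k).indices
```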