Learning high-level representations for graphs is crucial for tasks like node classification, where graph pooling aggregates node features to provide a holistic view that enhances predictive performance. Although numerous methods have been proposed in this promising and rapidly developing research field, most efforts to generalize the pooling operation to graphs are primarily performance-driven, and fairness issues are largely overlooked: i) graph pooling can exacerbate disparities in distribution among subgroups; ii) the resulting graph structure augmentation may inadvertently strengthen intra-group connectivity, leading to unintended inter-group isolation. To this end, this paper extends the initial effort on fair graph pooling to the development of fair graph neural networks, while also providing a unified framework to collectively address group and individual graph fairness. Our experimental evaluations on multiple datasets demonstrate that the proposed method not only outperforms state-of-the-art baselines in terms of fairness but also achieves comparable predictive performance.
Free, publicly-accessible full text available April 11, 2026.
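For readers unfamiliar with the operation, graph pooling in its simplest (mean) form aggregates node features into a single graph-level embedding. Below is a minimal NumPy sketch of that generic readout; it is our illustration of plain mean pooling, not the fairness-aware pooling proposed in the paper, and the names (`mean_pool`, `graph_ids`) are hypothetical.

```python
import numpy as np

def mean_pool(node_features: np.ndarray, graph_ids: np.ndarray) -> np.ndarray:
    """Average node features per graph: the simplest graph pooling readout."""
    graphs = np.unique(graph_ids)
    return np.stack([node_features[graph_ids == g].mean(axis=0) for g in graphs])

# Two toy graphs: nodes 0-2 belong to graph 0, nodes 3-4 to graph 1.
x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0], [4.0, 0.0]])
gid = np.array([0, 0, 0, 1, 1])
print(mean_pool(x, gid))  # -> [[0.667, 0.667], [3.0, 1.0]], one row per graph
```

Fairness-aware variants constrain how this aggregation treats nodes from different subgroups, rather than changing the basic readout shown here.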
-
The widespread use of Artificial Intelligence (AI) based decision-making systems has raised considerable concern about potential discrimination, particularly in domains with high societal impact. Most existing fairness research on tackling bias relies heavily on the presence of class labels, an assumption that often mismatches real-world scenarios and ignores the ubiquity of censored data. Further, existing works regard group fairness and individual fairness as two disparate goals, overlooking their inherent interconnection, i.e., addressing one can degrade the other. This paper proposes a novel unified method that aims to mitigate group unfairness under censorship while curbing the amplification of individual unfairness when enforcing group fairness constraints. Specifically, our ranking algorithm optimizes individual fairness within the bounds of group fairness, uniquely accounting for censored information. Evaluations across four benchmark tasks confirm the effectiveness of our method in quantifying and mitigating both fairness dimensions in the face of censored data.
Free, publicly-accessible full text available October 16, 2025.
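To make the censorship point concrete: with censored outcomes, predictive performance is usually audited through ranking quality (e.g., the concordance index) rather than label-based error rates, and a per-group gap in that quality is one crude group-fairness signal. The sketch below illustrates that pattern; it is our generic illustration, not the paper's ranking algorithm, and all names and numbers are hypothetical.

```python
import numpy as np

def c_index(scores, times, events):
    """Concordance index: fraction of comparable pairs ranked correctly.
    A pair (i, j) is comparable when i has an observed event and time_i < time_j;
    score ties are counted as discordant for simplicity."""
    correct, total = 0, 0
    for i in range(len(scores)):
        if not events[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(len(scores)):
            if times[i] < times[j]:
                total += 1
                correct += int(scores[i] > scores[j])
    return correct / total if total else float("nan")

# Hypothetical risk scores, survival times, event indicators, and group labels.
scores = np.array([0.9, 0.4, 0.7, 0.05, 0.8, 0.1])
times  = np.array([1.0, 3.0, 2.0, 4.0, 1.5, 5.0])
events = np.array([1, 0, 1, 1, 1, 0], dtype=bool)
group  = np.array([0, 0, 0, 1, 1, 1])

a, b = (group == 0), (group == 1)
gap = abs(c_index(scores[a], times[a], events[a])
          - c_index(scores[b], times[b], events[b]))
print(f"per-group C-index gap: {gap:.3f}")  # 0.333 with these toy numbers
```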
-
Graph Neural Networks (GNNs) have demonstrated remarkable capabilities across various domains. Despite their successful deployment, GNNs often reflect societal biases, which critically hinders their adoption in high-stakes decision-making scenarios such as online clinical diagnosis and financial crediting. Numerous efforts have been made to develop fair GNNs, but they typically concentrate on either individual or group fairness, overlooking the intricate interplay between the two; enhancing one usually comes at the cost of the other. In addition, existing individual fairness approaches that take a ranking perspective fail to identify discrimination within the ranking itself. This paper introduces two innovative notions, individual graph fairness and group-aware individual graph fairness, aiming to more accurately measure individual and group biases. Our Group Equality Individual Fairness (GEIF) framework is designed to achieve individual fairness while equalizing the level of individual fairness among subgroups. Preliminary experiments on several real-world graph datasets demonstrate that GEIF outperforms state-of-the-art methods by a significant margin in terms of individual fairness, group fairness, and utility.
Free, publicly-accessible full text available October 16, 2025.
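For intuition, individual fairness on graphs is often operationalized as "similar nodes should receive similar predictions," and a group-aware variant compares that score across subgroups. The following sketch computes a similarity-weighted prediction disparity per subgroup; it is our illustration of the general idea, not the GEIF metric, and all names and values are hypothetical.

```python
import numpy as np

def disparity(preds, similarity):
    """Similarity-weighted gap between predictions for pairs of individuals:
    lower means similar individuals receive more similar outcomes."""
    diff = np.abs(preds[:, None] - preds[None, :])
    return (similarity * diff).sum() / similarity.sum()

preds = np.array([0.9, 0.8, 0.2, 0.6])          # hypothetical node predictions
sim = np.array([[0.0, 1.0, 0.1, 0.0],           # hypothetical pairwise similarity
                [1.0, 0.0, 0.0, 0.1],
                [0.1, 0.0, 0.0, 1.0],
                [0.0, 0.1, 1.0, 0.0]])
group = np.array([0, 0, 1, 1])

# Group-aware view: compute the score within each subgroup and compare the levels;
# equalizing these levels is the kind of goal a group-aware notion formalizes.
for g in np.unique(group):
    m = group == g
    print(g, disparity(preds[m], sim[np.ix_(m, m)]))  # 0.1 vs. 0.4 here
```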
-
Understanding and correcting algorithmic bias in artificial intelligence (AI) has become increasingly important, leading to a surge in research on AI fairness within both the AI community and broader society. Traditionally, this research operates within the constrained supervised learning paradigm, assuming the presence of class labels, independent and identically distributed (IID) data, and batch-based learning that requires all training data to be available at once. In practice, however, class labels may be absent due to censoring, data are often represented as non-IID graph structures that capture connections among individual units, and data can arrive and evolve over time. These prevalent real-world data representations limit the applicability of existing fairness literature, which typically addresses fairness in static and tabular supervised learning settings. This paper reviews recent advances in AI fairness aimed at bridging these gaps for practical deployment in real-world scenarios. Additionally, it highlights current limitations and envisions opportunities with significant potential for real applications.
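As a concrete anchor for the static, labeled, IID baseline the review contrasts against, here is the classic demographic parity gap on tabular predictions. The metric itself is standard; the function name and data below are our hypothetical example.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Gap between positive-prediction rates across sensitive groups:
    |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)| in the two-group case."""
    rates = [y_pred[sensitive == s].mean() for s in np.unique(sensitive)]
    return max(rates) - min(rates)

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # hypothetical binary predictions
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # hypothetical sensitive attribute
print(demographic_parity_gap(y_pred, s))     # 0.75 - 0.25 = 0.5
```

Metrics of this form presuppose observed labels and exchangeable rows, which is exactly what censored, graph-structured, or streaming data break.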
-
Large Language Models (LLMs) have demonstrated remarkable success across various domains. However, despite their promising performance in numerous real-world applications, most of these algorithms lack fairness considerations. Consequently, they may produce discriminatory outcomes against certain communities, particularly marginalized populations, prompting extensive study of fair LLMs. Moreover, fairness in LLMs, in contrast to fairness in traditional machine learning, involves distinct backgrounds, taxonomies, and fulfillment techniques. To this end, this survey presents a comprehensive overview of recent advances in the literature on fair LLMs. Specifically, a brief introduction to LLMs is provided, followed by an analysis of the factors contributing to bias in LLMs. The concept of fairness in LLMs is then discussed categorically, summarizing metrics for evaluating bias in LLMs and existing algorithms for promoting fairness. Furthermore, resources for evaluating bias in LLMs, including toolkits and datasets, are summarized. Finally, existing research challenges and open questions are discussed.
Free, publicly-accessible full text available July 24, 2025.
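One widely used family of LLM bias metrics scores counterfactual prompt pairs that differ only in a demographic token. The sketch below illustrates that pattern; the template, names, and the stub `score` function are hypothetical placeholders standing in for a real sentiment or toxicity scorer, not any toolkit's API.

```python
# Minimal counterfactual-template bias probe, a common evaluation pattern in the
# fair-LLM literature. `score` is a hypothetical stub so the sketch runs end to end;
# in practice it would be a sentiment/toxicity model or an LLM output probability.
TEMPLATE = "{name} is a doctor and is very"

def score(text: str) -> float:
    """Hypothetical placeholder: return a sentiment score in [0, 1] for `text`."""
    return 0.8 if "Alice" in text else 0.6  # stub values for illustration only

def counterfactual_gap(template: str, group_a: str, group_b: str) -> float:
    """Bias signal: score difference when only the demographic token is swapped."""
    return abs(score(template.format(name=group_a))
               - score(template.format(name=group_b)))

print(counterfactual_gap(TEMPLATE, "Alice", "Bob"))  # 0.2 with the stub scorer
```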