Fairness and robustness are two important properties of learning models. Fairness ensures that models do not disproportionately harm (or benefit) some groups over others, while robustness measures a model's resilience to small input perturbations. Although both properties are equally important, this paper illustrates a dichotomy between fairness and robustness and analyzes when striving for fairness decreases a model's robustness to adversarial samples. The reported analysis sheds light on the factors causing this contrasting behavior, identifying the distance to the decision boundary across groups as a key factor. Experiments on non-linear models and different architectures validate the theoretical findings. In addition to the theoretical analysis, the paper also proposes a simple yet effective solution for constructing models that achieve good tradeoffs between fairness and robustness.
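The key factor named in this abstract, distance to the decision boundary across groups, has a closed form for linear classifiers: a point x lies |w·x + b| / ‖w‖ away from the boundary of sign(w·x + b). A minimal sketch of measuring this per group (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def mean_margin_per_group(X, groups, w, b):
    """Average distance to a linear decision boundary, per group.

    For the classifier sign(w.x + b), the distance from a point x
    to the boundary is |w.x + b| / ||w||.
    """
    margins = np.abs(X @ w + b) / np.linalg.norm(w)
    return {g: margins[groups == g].mean() for g in np.unique(groups)}

# Toy example: group 1 sits much closer to the boundary than group 0,
# so its samples are easier to flip with a small adversarial perturbation.
X = np.array([[3.0, 0.0], [4.0, 0.0], [0.5, 0.0], [0.6, 0.0]])
groups = np.array([0, 0, 1, 1])
w, b = np.array([1.0, 0.0]), 0.0
print(mean_margin_per_group(X, groups, w, b))
```

A group whose average margin is small is, by construction, more vulnerable to adversarial samples, which is one way to picture the fairness-robustness tension the abstract describes.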
H-nobs: Achieving Certified Fairness and Robustness in Distributed Learning on Heterogeneous Datasets
Fairness and robustness are two important goals in the design of modern distributed learning systems. Despite a few prior works attempting to achieve both, some key aspects of this direction remain underexplored. In this paper, we try to answer three largely unnoticed and unaddressed questions that are of paramount significance to this topic: (i) What makes it difficult to satisfy fairness and robustness jointly? (ii) Is it possible to establish theoretical guarantees for the dual property of fairness and robustness? (iii) How much fairness must be sacrificed when robustness is incorporated into the system? To address these questions, we first identify data heterogeneity as the key difficulty in combining fairness and robustness. Accordingly, we propose a fair and robust framework called H-nobs, which offers certified fairness and robustness through two key components: a fairness-promoting objective function and a simple robust aggregation scheme called norm-based screening (NBS). We explain in detail why NBS is the suitable scheme for our algorithm in contrast to other robust aggregation measures. In addition, we derive three convergence theorems for H-nobs for the cases where the learning model is nonconvex, convex, and strongly convex, respectively, which provide theoretical guarantees for both fairness and robustness. Further, we empirically investigate the influence of the robust mechanism (NBS) on the fairness performance of H-nobs, the first attempt at such an exploration.
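The norm-based screening idea can be pictured as follows. This is a sketch of the general technique only, not the paper's exact procedure: rank the client updates by Euclidean norm and drop the largest ones before averaging, on the premise that poisoned gradients tend to have outsized norms.

```python
import numpy as np

def norm_based_screening(updates, num_drop):
    """Aggregate client updates, discarding the num_drop largest-norm ones.

    updates: list of 1-D numpy arrays (one gradient/update per client).
    """
    norms = np.array([np.linalg.norm(u) for u in updates])
    keep = np.argsort(norms)[: len(updates) - num_drop]  # smallest norms survive
    return np.mean([updates[i] for i in keep], axis=0)

# Honest clients send modest updates; one attacker sends a huge one.
honest = [np.array([1.0, 1.0]), np.array([1.2, 0.8]), np.array([0.8, 1.2])]
poisoned = [np.array([100.0, -100.0])]
agg = norm_based_screening(honest + poisoned, num_drop=1)
print(agg)  # close to [1, 1]; the poisoned update was screened out
```

Note the tension the paper studies: under data heterogeneity, an honest client with unusual data can also produce a large-norm update, so screening by norm alone may silence exactly the clients a fairness-promoting objective tries to protect.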
- Award ID(s):
- 2231209
- PAR ID:
- 10481825
- Publisher / Repository:
- NeurIPS 2023
- Date Published:
- Journal Name:
- Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023)
- Format(s):
- Medium: X
- Location:
- New Orleans
- Sponsoring Org:
- National Science Foundation
More Like this
-
This work presents an information-theoretic perspective on group fairness trade-offs in federated learning (FL) with respect to sensitive attributes, such as gender, race, etc. Existing works often focus on either global fairness (overall disparity of the model across all clients) or local fairness (disparity of the model at each client), without always considering their trade-offs. There is a lack of understanding regarding the interplay between global and local fairness in FL, particularly under data heterogeneity, and if and when one implies the other. To address this gap, we leverage a body of work in information theory called partial information decomposition (PID), which identifies three sources of unfairness in FL, namely, Unique Disparity, Redundant Disparity, and Masked Disparity. We demonstrate how these three disparities contribute to global and local fairness using canonical examples. This decomposition helps us derive fundamental limits on the trade-off between global and local fairness, highlighting where they agree or disagree. We introduce the Accuracy and Global-Local Fairness Optimality Problem (AGLFOP), a convex optimization problem that defines the theoretical limits of accuracy and fairness trade-offs, identifying the best possible performance any FL strategy can attain given a dataset and client distribution. We also present experimental results on synthetic datasets and the ADULT dataset to support our theoretical findings.
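The global-versus-local distinction in this abstract is easy to make concrete with statistical parity (one common disparity measure; the paper's PID machinery is more general). A local gap is computed per client, a global gap over the pooled data, and the two need not agree:

```python
import numpy as np

def disparity(preds, sensitive):
    """Statistical-parity gap: |P(yhat=1 | a=0) - P(yhat=1 | a=1)|."""
    return abs(preds[sensitive == 0].mean() - preds[sensitive == 1].mean())

# One locally fair client and one locally unfair client...
c1_preds = np.array([1, 0, 1, 0]); c1_attr = np.array([0, 0, 1, 1])
c2_preds = np.array([1, 1, 0, 0]); c2_attr = np.array([0, 0, 1, 1])
local = [disparity(c1_preds, c1_attr), disparity(c2_preds, c2_attr)]

# ...while the pooled (global) gap sits strictly between their local gaps.
global_gap = disparity(np.concatenate([c1_preds, c2_preds]),
                       np.concatenate([c1_attr, c2_attr]))
print(local, global_gap)
```

Here the local gaps are 0 and 1 while the global gap is 0.5, a small illustration of why neither notion implies the other under data heterogeneity.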
-
In this paper we propose and analyze a finite difference numerical scheme for the Poisson-Nernst-Planck (PNP) equation system. To understand the energy structure of the PNP model, we make use of the Energetic Variational Approach (EnVarA), so that the PNP system can be reformulated as a non-constant-mobility H^{-1} gradient flow with singular logarithmic energy potentials involved. To ensure unique solvability and energy stability, the mobility function is treated explicitly, while both the logarithmic and the electric potential diffusion terms are treated implicitly, due to the convex nature of these two energy functional parts. The positivity-preserving property for both concentrations, n and p, is established at a theoretical level. This is based on the subtle fact that the singular nature of the logarithmic term around the value 0 prevents the numerical solution from reaching the singular value, so that the numerical scheme is always well-defined. In addition, an optimal-rate convergence analysis is provided in this work, in which many highly non-standard estimates have to be involved, due to the nonlinear parabolic coefficients. The higher-order asymptotic expansion (up to third-order temporal accuracy and fourth-order spatial accuracy), the rough error estimate (to establish the ℓ^∞ bound for n and p), and the refined error estimate have to be carried out to accomplish such a convergence result. To our knowledge, this work is the first to combine the following three theoretical properties for a numerical scheme for the PNP system: (i) unique solvability and positivity, (ii) energy stability, and (iii) optimal-rate convergence. A few numerical results are also presented in this article, which demonstrate the robustness of the proposed numerical scheme.
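For reference, the PNP system in a common dimensionless form (the paper's notation and scaling may differ) couples two Nernst-Planck drift-diffusion equations for the ion concentrations $n$ and $p$ with a Poisson equation for the electric potential $\psi$:

```latex
\partial_t n = \nabla \cdot \left( \nabla n + n \, \nabla \psi \right), \qquad
\partial_t p = \nabla \cdot \left( \nabla p - p \, \nabla \psi \right), \qquad
-\Delta \psi = p - n .
```

The logarithmic potentials mentioned above enter through the free energy
$E(n,p) = \int \left( n \ln n + p \ln p + \tfrac{1}{2} |\nabla \psi|^2 \right) \mathrm{d}x$,
whose variational (EnVarA) structure is what lets the system be read as an $H^{-1}$-type gradient flow: the singularity of $\ln(\cdot)$ at $0$ is exactly what the positivity-preserving argument exploits.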
-
Federated Learning (FL) enables multiple clients to collaboratively train a machine learning model while keeping their data private, eliminating the need for data sharing. Two common approaches to secure aggregation (SA) in FL are the single-aggregator and multiple-aggregator models. This work focuses on improving the multiple-aggregator model. Existing multiple-aggregator protocols such as Prio (NSDI 2017), Prio+ (SCN 2022), and Elsa (S&P 2023) either offer robustness only in the presence of semi-honest servers, or provide security without robustness and are limited to two aggregators. We introduce Mario, the first multiple-aggregator secure aggregation protocol that is both secure and robust in a malicious setting. Similar to the prior work of Prio and Prio+, Mario provides secure aggregation in a setup of n servers and m clients. Unlike previous work, Mario removes the assumption of semi-honest servers and provides a complete protocol with robustness under malicious clients and malicious servers. Our implementation shows that Mario is 3.40× and 283.4× faster than Elsa and Prio+, respectively.
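Prio-style multiple-aggregator protocols rest on additive secret sharing: each client splits its value into n random shares, one per server, so no single server learns anything, yet the servers' share totals combine to the exact aggregate. A minimal sketch over a prime field (illustrative only; Mario's actual protocol layers validity proofs and malicious-server robustness on top of this primitive):

```python
import secrets

P = 2**61 - 1  # a prime modulus; shares are uniform in Z_P

def share(value, n):
    """Split value into n additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)  # force shares to sum to value
    return shares

def aggregate(client_values, n_servers):
    # Each server s receives one share from every client and sums locally.
    server_totals = [0] * n_servers
    for v in client_values:
        for s, sh in enumerate(share(v, n_servers)):
            server_totals[s] = (server_totals[s] + sh) % P
    # Combining the server totals reveals only the sum, never any client value.
    return sum(server_totals) % P

print(aggregate([3, 5, 7], n_servers=4))  # 15
```

Any single server's view is a list of uniformly random field elements, which is why secrecy holds against each aggregator in isolation; robustness against servers that deviate from the protocol is the harder problem this paper addresses.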
-
Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.
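Ditto's personalization mechanism is usually stated as each client k solving min over v_k of F_k(v_k) + (λ/2)·||v_k − w*||², where F_k is the local loss and w* the global model; λ interpolates between pure local training (λ = 0) and the global model (λ → ∞). A toy gradient-descent sketch of that objective (a quadratic local loss stands in for F_k; the paper's solver and analysis are more involved):

```python
import numpy as np

def ditto_local_step(v_k, w_global, grad_fk, lam, lr):
    """One personalized update: descend on the local loss F_k plus the
    proximal term (lam/2)*||v_k - w_global||^2 pulling v_k toward w_global."""
    return v_k - lr * (grad_fk(v_k) + lam * (v_k - w_global))

# Toy quadratic local loss F_k(v) = 0.5*||v - target||^2, so grad = v - target.
target = np.array([2.0, 0.0])       # this client's local optimum
grad_fk = lambda v: v - target
w_global = np.array([0.0, 0.0])     # global model (e.g. dragged off by poisoning)

v = np.array([0.0, 0.0])
for _ in range(2000):
    v = ditto_local_step(v, w_global, grad_fk, lam=1.0, lr=0.05)
# v converges to (target + lam*w_global)/(1 + lam) = [1, 0]:
print(np.round(v, 3))
```

The fixed point splits the difference between the local optimum and the global model, which is the intuition behind the paper's claim: personalization buys robustness (a poisoned w* only pulls, never dictates) while the shared w* keeps per-device performance from fragmenting, aiding fairness.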