Search for: All records

Award ID contains: 2003129

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

  1. Free, publicly-accessible full text available August 15, 2025
  2. Free, publicly-accessible full text available May 14, 2025
  3. Free, publicly-accessible full text available November 26, 2024
  4. Free, publicly-accessible full text available August 10, 2024
  5. Free, publicly-accessible full text available August 10, 2024
  6. Free, publicly-accessible full text available August 9, 2024
  7. Minimizing risk subject to fairness constraints is a popular approach to learning a fair classifier. Recent work has shown that this approach yields an unfair classifier if the training set is corrupted. In this work, we study the minimum amount of data corruption required for a successful flipping attack. First, we derive lower and upper bounds on this quantity and show that they are tight when the target model is the unique unconstrained risk minimizer. Second, we propose a computationally efficient data-poisoning attack algorithm that can compromise the performance of fair learning algorithms.
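The constrained approach the abstract refers to can be sketched as follows. This is a minimal illustration, not the paper's construction: a soft demographic-parity penalty stands in for a hard fairness constraint, and the synthetic data, penalty form, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

# Hypothetical synthetic setup: one feature correlated with a binary
# sensitive attribute s, so the unconstrained classifier's scores differ
# sharply between groups.
rng = np.random.default_rng(0)
n = 1000
s = rng.integers(0, 2, n)                        # sensitive attribute
x = rng.normal(loc=1.5 * s, scale=1.0, size=n)   # feature shifted by group
y = (x + rng.normal(0.0, 0.5, n) > 0.75).astype(float)
X = np.column_stack([x, np.ones(n)])             # feature + bias term


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def score_gap(w, X, s):
    """Demographic-parity gap between the groups' mean predicted scores."""
    p = sigmoid(X @ w)
    return abs(p[s == 1].mean() - p[s == 0].mean())


def train(X, y, s, lam, steps=3000, lr=0.05):
    """Minimize logistic risk + lam * (score gap)^2 by gradient descent.

    The squared-gap penalty is a soft surrogate for the fairness
    constraint; lam = 0 recovers the unconstrained risk minimizer.
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_risk = X.T @ (p - y) / len(y)
        gap = p[s == 1].mean() - p[s == 0].mean()
        dp = p * (1.0 - p)                       # derivative of sigmoid
        g1 = (X[s == 1] * dp[s == 1, None]).mean(axis=0)
        g0 = (X[s == 0] * dp[s == 0, None]).mean(axis=0)
        w -= lr * (grad_risk + lam * 2.0 * gap * (g1 - g0))
    return w


w_plain = train(X, y, s, lam=0.0)    # unconstrained risk minimizer
w_fair = train(X, y, s, lam=10.0)    # fairness-penalized classifier
```

On this data the penalized model's score gap is far smaller than the unconstrained model's. The paper's point is that a learner of this constrained form is itself vulnerable: flipping a small, carefully chosen set of training labels can steer it toward an unfair target model, and the work bounds how much corruption such an attack needs.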