Title: Foundations for Unfairness in Anomaly Detection - Case Studies in Facial Imaging Data
Deep anomaly detection (AD) is perhaps the most controversial of data analytic tasks, as it identifies entities that are then specifically targeted for further investigation or exclusion. Also controversial is the application of AI to facial imaging data. This work explores the intersection of these two areas to understand two core questions: "who" these algorithms are being unfair to and, equally important, "why". Recent work has shown that deep AD can be unfair to different groups despite being unsupervised, with a recent study showing that, for portraits of people, men of color are far more likely to be chosen as outliers. We study the two main categories of AD algorithms, autoencoder-based and single-class-based, which effectively try to compress all the instances, with those that cannot be easily compressed deemed to be outliers. We experimentally verify sources of unfairness such as the under-representation of a group (e.g., people of color are relatively rare), spurious group features (e.g., men are often photographed with hats), and group labeling noise (e.g., race is subjective). We conjecture that lack of compressibility is the main foundation and that the others cause it, but experimental results show otherwise, and we present a natural hierarchy amongst them.
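To make the compression view above concrete, here is a minimal, illustrative sketch (not the paper's implementation) of autoencoder-based AD in Python/PyTorch: an autoencoder is trained to reconstruct all instances, and the instances with the highest reconstruction error, i.e., those it cannot compress well, are flagged as outliers. The architecture, training length, and cutoff are placeholder assumptions.

    # Minimal sketch of autoencoder-based anomaly detection (illustrative only).
    # Instances the autoencoder cannot compress well score high and are flagged.
    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        def __init__(self, dim=784, bottleneck=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                     nn.Linear(128, bottleneck))
            self.dec = nn.Sequential(nn.Linear(bottleneck, 128), nn.ReLU(),
                                     nn.Linear(128, dim))
        def forward(self, x):
            return self.dec(self.enc(x))

    model = AutoEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(1000, 784)              # placeholder for flattened face images
    for _ in range(50):                    # short training loop for illustration
        opt.zero_grad()
        ((model(x) - x) ** 2).mean().backward()
        opt.step()
    with torch.no_grad():                  # per-instance reconstruction error
        scores = ((model(x) - x) ** 2).mean(dim=1)
    outliers = scores.topk(10).indices     # arbitrary top-10 cutoff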
Award ID(s):
2310481
PAR ID:
10540273
Author(s) / Creator(s):
Publisher / Repository:
ACM AAAI
Date Published:
Subject(s) / Keyword(s):
Outlier detection; Fairness; Anomaly detection
Format(s):
Medium: X
Location:
San Jose
Sponsoring Org:
National Science Foundation
More Like this
  1.
    A pilot inventory to develop measures of bias and discrimination experienced by engineering doctoral students asked whether they had been treated unfairly by their primary advisor, secondary advisor, and other faculty. Analyses of pilot data (n = 250) revealed that Women, Students of Color, and sexual minorities perceived experiences of unfair treatment in intricate patterns. Post hoc analyses show that Women experience more incidents of unfair treatment than men. Racial/ethnic identity groups report different numbers of unfair-treatment incidents, with Students of Color generally reporting more experiences than white students. Being a sexual minority also contributed to reporting more incidents of unfair treatment. Unfair treatment from faculty significantly predicted students changing, or considering changing, research labs when controlling for gender, race/ethnicity, and sexuality. Unfair treatment from faculty also significantly impacted engineering identity when controlling for gender, race/ethnicity, sexuality, lab changers, and change considerers. Analyses of pilot data demonstrated the negative impact of unfair treatment on students and their development as engineers.
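    As an illustrative sketch only (not the study's actual analysis), the kind of controlled model this abstract describes could be fit as a logistic regression of lab changing on unfair treatment with demographic covariates; the file and column names below are hypothetical placeholders.

        # Hypothetical sketch of the controlled analysis described above.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("pilot_inventory.csv")   # hypothetical data file
        fit = smf.logit("changed_lab ~ unfair_treatment + C(gender)"
                        " + C(race_ethnicity) + C(sexual_minority)",
                        data=df).fit()
        print(fit.summary())  # unfair_treatment coefficient is the effect of interest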
    Previous findings on people perception show that perceivers are attuned to the social categories of group members, which subsequently influences social judgments. An outstanding question is whether perceivers are also attuned to visual cue variability (e.g., gender typicality). In two studies (n = 165), perceivers viewed 12-person ensembles (500 ms) with varying White men-to-women ratios. Importantly, faces of one gender/sex were morphed to appear either more masculine or more feminine. Consistent with prior work, results indicated that judgments varied by the actual gender/sex ratio. In addition, perceivers' judgments varied as a function of the manipulated gender cues. Ensembles composed of masculine, rather than feminine, White men were judged to contain more men, to be higher in perceived masculinity, and to be more threatening. Complementary results were found for ensembles composed of feminine, rather than masculine, White women. These findings highlight the impact of both social categories and visual phenotypic cue variability on people perception.
  3.
    In Autonomous Driving (AD) systems, perception is both security- and safety-critical. Despite various prior studies on its security issues, all of them consider attacks on camera- or LiDAR-based AD perception alone. However, production AD systems today predominantly adopt a Multi-Sensor Fusion (MSF) based design, which in principle can be more robust against these attacks under the assumption that not all fusion sources are (or can be) attacked at the same time. In this paper, we present the first study of the security issues of MSF-based perception in AD systems. We directly challenge the basic MSF design assumption above by exploring the possibility of attacking all fusion sources simultaneously. This allows us, for the first time, to understand how much security guarantee MSF can fundamentally provide as a general defense strategy for AD perception. We formulate the attack as an optimization problem to generate a physically realizable, adversarial 3D-printed object that misleads an AD system into failing to detect it and thus crashing into it. To systematically generate such a physical-world attack, we propose a novel attack pipeline that addresses two main design challenges: (1) non-differentiable target camera and LiDAR sensing systems, and (2) non-differentiable cell-level aggregated features popularly used in LiDAR-based AD perception. We evaluate our attack on MSF algorithms included in representative open-source, industry-grade AD systems in real-world driving scenarios. Our results show that the attack achieves over a 90% success rate across different object types and MSF algorithms. Our attack is also found to be stealthy, robust to victim positions, transferable across MSF algorithms, and physical-world realizable after being 3D-printed and captured by LiDAR and camera devices. To concretely assess the end-to-end safety impact, we further perform a simulation evaluation and show that it can cause a 100% vehicle collision rate for an industry-grade AD system. We also evaluate and discuss defense strategies.
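    The optimization at the heart of this attack can be sketched, in a heavily simplified form, as gradient descent on object shape parameters to suppress the fused detector's confidence. This sketch assumes a differentiable surrogate of the MSF pipeline; the paper's actual pipeline additionally handles the non-differentiable sensing and aggregation steps it lists.

        # Simplified attack skeleton (illustrative; not the paper's pipeline).
        import torch

        def fused_confidence(shape):
            # Placeholder differentiable surrogate for MSF perception:
            # shape -> camera + LiDAR features -> fused detection score in (0, 1).
            return torch.sigmoid(shape.sum())

        shape = torch.zeros(300, requires_grad=True)  # e.g. mesh vertex offsets
        opt = torch.optim.Adam([shape], lr=0.01)
        for _ in range(500):
            opt.zero_grad()
            fused_confidence(shape).backward()  # minimize: make the detector miss
            opt.step()
            with torch.no_grad():               # keep the object 3D-printable
                shape.clamp_(-0.05, 0.05)       # assumed deviation bound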
  4. Blum, A (Ed.)
    Algorithmic fairness, and in particular the fairness of scoring and classification algorithms, has become a topic of increasing social concern and has recently witnessed an explosion of research in theoretical computer science, machine learning, statistics, the social sciences, and law. Much of the literature considers the case of a single classifier (or scoring function) used once, in isolation. In this work, we initiate the study of the fairness properties of systems composed of algorithms that are fair in isolation; that is, we study fairness under composition. We identify pitfalls of naïve composition and give general constructions for fair composition, demonstrating both that classifiers that are fair in isolation do not necessarily compose into fair systems and also that seemingly unfair components may be carefully combined to construct fair systems. We focus primarily on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], but also extend our results to a large class of group fairness definitions popular in the recent literature, exhibiting several cases in which group fairness definitions give misleading signals under composition. 
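    For a deterministic scorer, the individual-fairness condition of [Dwork et al.] can be illustrated as a Lipschitz check: similar individuals must receive similar scores. A minimal sketch, with the task metric and scorer as assumed placeholders:

        # Toy check of individual fairness: |f(x) - f(y)| <= d(x, y) for all pairs.
        # (The general definition bounds the distance between output distributions.)
        import itertools
        import numpy as np

        def individually_fair(f, d, individuals):
            return all(abs(f(x) - f(y)) <= d(x, y)
                       for x, y in itertools.combinations(individuals, 2))

        people = [np.random.rand(5) for _ in range(20)]  # toy feature vectors
        d = lambda x, y: float(np.linalg.norm(x - y))    # assumed task metric
        f = lambda x: float(x.mean())                    # toy scoring function
        print(individually_fair(f, d, people))
        # Composition pitfall from the abstract: two scorers can each pass this
        # check while a decision rule combining them does not.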
  5. Pérez, Guillermo A.; Raskin, Jean-François (Ed.)
    Deep neural networks (DNNs) are increasingly being deployed to perform safety-critical tasks. The opacity of DNNs, which prevents humans from reasoning about them, presents new safety and security challenges. To address these challenges, the verification community has begun developing techniques for rigorously analyzing DNNs, with numerous verification algorithms proposed in recent years. While a significant amount of work has gone into developing these verification algorithms, little work has been devoted to rigorously studying the computability and complexity of the underlying theoretical problems. Here, we seek to contribute to bridging this gap. We focus on two kinds of DNNs: those that employ piecewise-linear activation functions (e.g., ReLU) and those that employ piecewise-smooth activation functions (e.g., sigmoids). We prove the following two theorems: 1. The decidability of verifying DNNs with piecewise-smooth activation functions is equivalent to a well-known open problem formulated by Tarski; and 2. The DNN verification problem for any quantifier-free linear arithmetic specification can be reduced to the DNN reachability problem, whose approximation is NP-complete. These results answer two fundamental questions about the computability and complexity of DNN verification and the ways it is affected by the network's activation functions and error tolerance, and could help guide future efforts in developing DNN verification tools.
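    As a concrete (and standard, not paper-specific) instance of the reachability problem mentioned above, interval bound propagation soundly over-approximates the outputs a small ReLU network can reach on a box of inputs; the weights below are arbitrary placeholders.

        # Sound over-approximate reachability for a tiny ReLU network.
        import numpy as np

        def affine_bounds(lo, hi, W, b):
            # Bounds of W @ x + b over the box [lo, hi], split by weight sign.
            Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
            return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

        W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
        W2, b2 = np.array([[1.0, 1.0]]), np.array([-1.0])

        lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])   # input box
        lo, hi = affine_bounds(lo, hi, W1, b1)
        lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)         # ReLU
        lo, hi = affine_bounds(lo, hi, W2, b2)
        print(lo, hi)  # if hi < 0, the output provably never reaches 0 on this box
        # Exact verification instead branches on ReLU activation patterns,
        # which is where the hardness comes from.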