Title: Algorithmic equity in the hiring of underrepresented IT job candidates
Purpose: The purpose of this paper is to offer a critical analysis of talent acquisition software and its potential for fostering equity in the hiring process for underrepresented IT professionals. The under-representation of women, African-American, and Latinx professionals in the IT workforce is a longstanding issue that contributes to and is impacted by algorithmic bias.
Design/methodology/approach: Sources of algorithmic bias in talent acquisition software are presented. Feminist design thinking is presented as a theoretical lens for mitigating algorithmic bias.
Findings: Data are just one tool for recruiters to use; human expertise is still necessary. Even well-intentioned algorithms are not neutral and should be audited for morally and legally unacceptable decisions. Feminist design thinking provides a theoretical framework for considering equity in the hiring decisions made by talent acquisition systems and their users.
Social implications: This research implies that algorithms may serve to codify deep-seated biases, making IT work environments just as homogeneous as they are currently. If bias exists in talent acquisition software, the potential for propagating inequity and harm is far more significant and widespread due to the homogeneity of the specialists creating artificial intelligence (AI) systems.
Originality/value: This work uses equity as a central concept for considering algorithmic bias in talent acquisition. Feminist design thinking provides a framework for fostering a richer understanding of what fairness means and evaluating how AI software might impact marginalized populations.
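The Findings above argue that even well-intentioned algorithms should be audited for morally and legally unacceptable decisions. Purely as an illustration of one common audit, and not the paper's feminist-design-thinking framework, the sketch below computes EEOC-style "four-fifths rule" adverse-impact ratios over a hypothetical table of hiring outcomes; the column names and data are made up.

```python
# Minimal adverse-impact ("four-fifths rule") audit sketch.
# Assumptions: a pandas DataFrame with a demographic group column and a 0/1
# "selected" outcome column; both names are hypothetical.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Each group's selection rate divided by the highest group's selection rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

# Made-up example data: ratios below 0.8 are conventionally flagged for review.
candidates = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "selected": [1, 1, 0, 1, 0, 0, 0],
})
print(adverse_impact_ratios(candidates, "group", "selected"))
```

A ratio below 0.8 for any group is a conventional trigger for closer human review, consistent with the abstract's point that data are only one tool and human expertise remains necessary.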
Award ID(s): 1841368
PAR ID: 10202431
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Online Information Review
Volume: 44
Issue: 2
ISSN: 1468-4527
Page Range / eLocation ID: 383 to 395
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Artificial Intelligence (AI) bots receive much attention and see wide use in manufacturing and even retail-cashier applications. Our research trains AI bots to be software engineering assistants, specifically to detect biases and errors inside AI software applications. An example application is a machine learning system that sorts and classifies people according to various attributes, such as the algorithms involved in criminal sentencing, hiring, and admissions. Biases, unfair decisions, and flaws in equity, diversity, and justice in such systems could have severe consequences. As a Hispanic-Serving Institution, we are concerned about underrepresented groups and devoted considerable time to implementing "An Assure AI" (AAAI) Bot to detect biases and errors in AI applications. Our state-of-the-art AI bot was developed based on our accumulated prior research in AI and Deep Learning (DL). The key differentiator is our unique approach: instead of cleaning the input data, filtering it, and minimizing its biases, we trained our deep Neural Networks (NN) to detect and mitigate biases in existing AI models. The backend of our bot uses the Detection Transformer (DETR) framework, developed by Facebook, …
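The entry above names DETR as the bot's backend but is cut off before describing how it is used. Purely as a hedged sketch, the snippet below loads the publicly released DETR checkpoint through the Hugging Face transformers port and runs one detection pass; the image path is a hypothetical placeholder, and nothing here reflects how the AAAI bot actually wires the detector into bias and error checking.

```python
# Sketch of loading the DETR backend (Hugging Face port of Facebook's
# facebook/detr-resnet-50 checkpoint) and running a single detection pass.
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

image = Image.open("sample_input.jpg")  # hypothetical input image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits and boxes into labeled detections above a confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.9
)[0]
for score, label in zip(detections["scores"], detections["labels"]):
    print(model.config.id2label[label.item()], round(score.item(), 3))
```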
  2. A received wisdom is that automated decision-making serves as an anti-bias intervention. The conceit is that removing humans from the decision-making process will also eliminate human bias. The paradox, however, is that in some instances, automated decision-making has served to replicate and amplify bias. With a case study of the algorithmic capture of hiring as a heuristic device, this Article provides a taxonomy of problematic features associated with algorithmic decision-making as an anti-bias intervention and argues that those features are at odds with the fundamental principle of equal opportunity in employment. To examine these problematic features within the context of algorithmic hiring and to explore potential legal approaches to rectifying them, the Article brings together two streams of legal scholarship: law and technology studies and employment & labor law. Counterintuitively, the Article contends that the framing of algorithmic bias as a technical problem is misguided. Rather, the Article's central claim is that bias is introduced in the hiring process, in large part, due to an American legal tradition of deference to employers, especially the allowance of such nebulous hiring criteria as "cultural fit." The Article observes the lack of legal frameworks that take into account the emerging technological capabilities of hiring tools, which makes it difficult to detect disparate impact. The Article thus argues for a rethinking of legal frameworks that takes into account both the liability of employers and that of the makers of algorithmic hiring systems who, as brokers, owe a fiduciary duty of care. Particularly in relation to Title VII, the Article proposes that, in legal reasoning corollary to extant tort doctrines, an employer's failure to audit and correct its automated hiring platforms for disparate impact could serve as prima facie evidence of discriminatory intent, leading to the development of the doctrine of discrimination per se. The Article also considers approaches separate from employment law, such as establishing consumer legal protections for job applicants that would mandate their access to the dossier of information consulted by automated hiring systems in making the employment decision.
  3. Algorithms are a central component of most services we use across a range of domains. These services, platforms, and devices rely on computing and technology professionals – who work as data scientists, programmers, or artificial intelligence (AI) experts – to meet their intended goals. How do we train future professionals to have an ethical mindset in their understanding, design, and implementation of algorithms? This was the question that prompted the use of a role-playing case study, which we designed, implemented, and studied in an undergraduate engineering course. We used the Boeing Max 8 flight disaster as the scenario for this case study as it encapsulates how a software algorithm shapes decision-making in a complex scenario. Theoretically, our work is guided by the situated learning paradigm, specifically the need to learn perspectival thinking for decision-making. The ability to make ethical decisions relies to a large extent on the ability of the decision-maker to take context into account – to understand not just the immediate technical need of the work but also larger implications that might even result from unanticipated consequences. Findings from the evaluation of the role-play scenario show that students reported higher engagement with the case study material and a better understanding of the scenario due to taking on a specific role related to the scenario. Analysis of pre- and post-discussion assignments shows a shift in their perspective of the case, further supporting the overall goal of developing a more situated understanding of ethical decision-making.
  4. Biased AI models result in unfair decisions. In response, a number of algorithmic solutions have been engineered to mitigate bias, among which the Synthetic Minority Oversampling Technique (SMOTE) has been studied to some extent. Although the SMOTE technique and its variants have great potential to help improve fairness, there is little theoretical justification for their success. In addition, formal error and fairness bounds are not clearly given. This paper attempts to address both issues. We prove and demonstrate that synthetic data generated by oversampling underrepresented groups can mitigate algorithmic bias in AI models, while keeping the predictive errors bounded. We further compare this technique to the existing state-of-the-art fair AI techniques on five datasets using a variety of fairness metrics. We show that this approach can effectively improve fairness even when there is a significant amount of label and selection bias, regardless of the baseline AI algorithm.
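As a rough companion to the entry above, here is a minimal sketch, not the paper's procedure or its error/fairness bounds, that rebalances training data with imbalanced-learn's SMOTE and compares a simple demographic-parity gap before and after; the synthetic dataset, group encoding, and logistic-regression baseline are all illustrative assumptions. Note that stock SMOTE oversamples the minority class label; group-aware oversampling of underrepresented demographic groups, as studied in the paper, would additionally require splitting the resampling by group.

```python
# Illustrative SMOTE rebalancing plus a demographic-parity check.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)  # 0 = majority group, 1 = underrepresented group
X = np.column_stack([rng.normal(size=n), rng.normal(size=n), group])
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def demographic_parity_gap(model, X_eval):
    """Absolute difference in positive-prediction rates between the two groups."""
    pred = model.predict(X_eval)
    g = X_eval[:, 2]
    return abs(pred[g == 0].mean() - pred[g == 1].mean())

baseline = LogisticRegression().fit(X_tr, y_tr)

# Oversample the minority class with SMOTE and retrain the same baseline model.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
rebalanced = LogisticRegression().fit(X_bal, y_bal)

print("DP gap before:", demographic_parity_gap(baseline, X_te))
print("DP gap after: ", demographic_parity_gap(rebalanced, X_te))
```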
  5. Recent calls have been made for equity tools and frameworks to be integrated throughout the research and design life cycle, from conception to implementation, with an emphasis on reducing inequity in artificial intelligence (AI) and machine learning (ML) applications. Simply stating that equity should be integrated throughout, however, leaves much to be desired as industrial ecology (IE) researchers, practitioners, and decision-makers attempt to employ equitable practices. In this forum piece, we use a critical review approach to explain how socioecological inequities emerge in ML applications across their life cycle stages by leveraging the food system. We exemplify the use of a comprehensive questionnaire to delineate unfair ML bias across data bias, algorithmic bias, and selection and deployment bias categories. Finally, we provide consolidated guidance and tailored strategies to help address AI/ML unfair bias and inequity in IE applications. Specifically, the guidance and tools help to address sensitivity, reliability, and uncertainty challenges. There is also discussion of how bias and inequity in AI/ML affect other IE research and design domains besides the food system, such as living labs and circularity. We conclude with an explanation of the future directions IE should take to address unfair bias and inequity in AI/ML, and we call for systemic equity to be embedded throughout IE applications to fundamentally understand domain-specific socioecological inequities, identify potential unfairness in ML, and select mitigation strategies in a manner that translates across different research domains.