Title: Algorithmic equity in the hiring of underrepresented IT job candidates
Purpose: The purpose of this paper is to offer a critical analysis of talent acquisition software and its potential for fostering equity in the hiring process for underrepresented IT professionals. The under-representation of women, African-American and Latinx professionals in the IT workforce is a longstanding issue that contributes to and is impacted by algorithmic bias.
Design/methodology/approach: Sources of algorithmic bias in talent acquisition software are presented. Feminist design thinking is presented as a theoretical lens for mitigating algorithmic bias.
Findings: Data are just one tool for recruiters to use; human expertise is still necessary. Even well-intentioned algorithms are not neutral and should be audited for morally and legally unacceptable decisions. Feminist design thinking provides a theoretical framework for considering equity in the hiring decisions made by talent acquisition systems and their users.
Social implications: This research implies that algorithms may serve to codify deep-seated biases, making IT work environments just as homogeneous as they are currently. If bias exists in talent acquisition software, the potential for propagating inequity and harm is far more significant and widespread due to the homogeneity of the specialists creating artificial intelligence (AI) systems.
Originality/value: This work uses equity as a central concept for considering algorithmic bias in talent acquisition. Feminist design thinking provides a framework for fostering a richer understanding of what fairness means and evaluating how AI software might impact marginalized populations.
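The Findings' call to audit algorithms for unacceptable decisions can be made concrete with a simple group-level check of a system's outputs. The sketch below is not drawn from the paper; it assumes hypothetical hiring decisions labeled by demographic group and applies the EEOC's four-fifths rule as one possible audit criterion.

```python
# Illustrative adverse-impact audit of a hiring model's decisions.
# Input is a list of (group, selected) records; the group labels and the
# 0.8 threshold (the EEOC "four-fifths rule") are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, selected is bool."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        hires[group] += int(selected)
    return {g: hires[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions, threshold=0.8):
    """Compare each group's selection rate to the highest-rate group."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: {"rate": r, "ratio": r / best, "flag": (r / best) < threshold}
            for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical outcomes: group B is selected at half the rate of group A.
    sample = [("A", True)] * 40 + [("A", False)] * 60 \
           + [("B", True)] * 20 + [("B", False)] * 80
    for group, stats in adverse_impact_ratios(sample).items():
        print(group, stats)
```

A check like this is only one piece of an audit; as the Findings note, human expertise is still needed to interpret why a disparity appears and whether the decision criteria themselves are acceptable.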
Award ID(s): 1841368
NSF-PAR ID: 10202431
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Online Information Review
Volume: 44
Issue: 2
ISSN: 1468-4527
Page Range / eLocation ID: 383 to 395
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    A quiet revolution is afoot in the field of law. Technical systems employing algorithms are shaping and displacing professional decision making, and they are disrupting and restructuring relationships between law firms, lawyers, and clients. Decision-support systems marketed to legal professionals to support e-discovery—generally referred to as “technology assisted review” (TAR)—increasingly rely on “predictive coding”: machine-learning techniques to classify and predict which of the voluminous electronic documents subject to litigation should be withheld or produced to the opposing side. These systems and the companies offering them are reshaping relationships between lawyers and clients, introducing new kinds of professionals into legal practice, altering the discovery process, and shaping how lawyers construct knowledge about their cases and professional obligations. In the midst of these shifting relationships—and the ways in which these systems are shaping the construction and presentation of knowledge—lawyers are grappling with their professional obligations, ethical duties, and what it means for the future of legal practice. Through in-depth, semi-structured interviews of experts in the e-discovery technology space—the technology company representatives who develop and sell such systems to law firms and the legal professionals who decide whether and how to use them in practice—we shed light on the organizational structures, professional rules and norms, and technical system properties that are shaping and being reshaped by predictive coding systems. Our findings show that AI-supported decision systems such as these are reconfiguring professional work practices. In particular, they highlight concerns about potential loss of professional agency and skill, limited understanding and thereby both over- and under-reliance on decision-support systems, and confusion about responsibility and accountability as new kinds of technical professionals and technologies are brought into legal practice. The introduction of predictive coding systems and the new professional and organizational arrangements they are ushering into legal practice compound general concerns over the opacity of technical systems with specific concerns about encroachments on the construction of expert knowledge, liability frameworks, and the potential (mis)alignment of machine reasoning with professional logic and ethics. Based on our findings, we conclude that predictive coding tools—and likely other algorithmic systems lawyers use to construct knowledge and reason about legal practice—challenge the current model for evaluating whether and how tools are appropriate for legal practice. As tools become both more complex and more consequential, it is unreasonable to rely solely on legal professionals—judges, law firms, and lawyers—to determine which technologies are appropriate for use. The legal professionals we interviewed report relying on the evaluation and judgment of a range of new technical experts within law firms and, increasingly, third-party vendors and their technical experts. This system for choosing technical systems upon which lawyers rely to make professional decisions—e.g., whether documents are responsive, or whether the standard of proportionality has been met—is no longer sufficient.
Just as the tools of medicine are reviewed by appropriate experts before they are offered for adoption by medical professionals, we argue that the legal profession must develop new processes for determining which algorithmic tools are fit to support lawyers’ decision making. Relatedly, because predictive coding systems are used to produce lawyers’ professional judgment, we argue they must be designed for contestability—providing greater transparency, interaction, and configurability around embedded choices to ensure decisions about how to embed core professional judgments, such as relevance and proportionality, remain salient and demand engagement from lawyers, not just their technical experts.
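To ground the abstract's description of predictive coding, here is a minimal sketch (not drawn from the interviews or from any vendor's product) of the underlying machine-learning pattern: a classifier learns from attorney-coded seed documents and then ranks unreviewed documents by predicted responsiveness. The documents, labels, and scoring step are hypothetical, and commercial TAR systems are considerably more elaborate.

```python
# Minimal sketch of "predictive coding": a text classifier trained on
# attorney-reviewed seed documents ranks the remaining corpus by the
# likelihood of being responsive to the discovery request.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical seed set: documents an attorney has already coded.
seed_docs = [
    "Board minutes discussing the disputed merger terms",
    "Email scheduling the quarterly sales review lunch",
    "Draft contract amendment covering indemnification clauses",
    "Newsletter about the office holiday party",
]
seed_labels = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_docs, seed_labels)

# Score the unreviewed corpus; high-scoring documents are routed to
# human review, the rest may be withheld subject to validation.
corpus = [
    "Memo on indemnification exposure from the merger",
    "Cafeteria menu for next week",
]
for doc, p in zip(corpus, model.predict_proba(corpus)[:, 1]):
    print(f"{p:.2f}  {doc}")
```

The contestability argument above maps onto choices visible even in this toy version: who codes the seed set, where the review cutoff sits, and how relevance is operationalized are professional judgments, not purely technical parameters.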
  2. Algorithms are a central component of most services we use across a range of domains. These services, platforms, and devices rely on computing and technology professionals – who work as data scientists, programmers, or artificial intelligence (AI) experts – to meet their intended goals. How do we train future professionals to have an ethical mindset in their understanding, design, and implementation of algorithms? This was the question that prompted the use of a role-playing case study, which we designed, implemented, and studied in an undergraduate engineering course. We used the Boeing Max 8 flight disaster as the scenario for this case study, as it encapsulates how a software algorithm shapes decision-making in a complex scenario. Theoretically, our work is guided by the situated learning paradigm, specifically the need to learn perspectival thinking for decision-making. The ability to make ethical decisions relies to a large extent on the ability of the decision-maker to take context into account – to understand not just the immediate technical need of the work but also the larger implications, including those arising from unanticipated consequences. Findings from the evaluation of the role-play scenario show that students reported higher engagement with the case study material and a better understanding of the scenario due to taking on a specific role related to the scenario. Analysis of pre- and post-discussion assignments shows a shift in their perspective of the case, further supporting the overall goal of developing a more situated understanding of ethical decision-making.
  3. Artificial Intelligence (AI) bots receive much attention and use in industrial manufacturing and even store-cashier applications. Our research trains AI bots to serve as software engineering assistants, specifically to detect biases and errors inside AI software applications. An example application is an AI machine-learning system that sorts and classifies people according to various attributes, such as those used in criminal sentencing, hiring, and admission practices. Biases, unfair decisions, and flaws related to equity, diversity, and justice in such systems could have severe consequences. As a Hispanic-Serving Institution, we are concerned about underrepresented groups and devoted an extended amount of our time to implementing “An Assure AI” (AAAI) Bot to detect biases and errors in AI applications. Our state-of-the-art AI Bot was developed based on our previously accumulated research in AI and Deep Learning (DL). The key differentiator is that we take a unique approach: instead of cleaning and filtering the input data to minimize its biases, we trained our deep Neural Networks (NN) to detect and mitigate biases in existing AI models. The backend of our bot uses the Detection Transformer (DETR) framework, developed by Facebook.
  4.
    In response to Li, Reigh, He, and Miller's commentary, “Can we and should we use artificial intelligence for formative assessment in science,” we argue that artificial intelligence (AI) is already being widely employed in formative assessment across various educational contexts. While agreeing with Li et al.'s call for further studies on equity issues related to AI, we emphasize the need for science educators to adapt to the AI revolution that has outpaced the research community. We challenge the somewhat restrictive view of formative assessment presented by Li et al., highlighting the significant contributions of AI in providing formative feedback to students, assisting teachers in assessment practices, and aiding in instructional decisions. We contend that AI-generated scores should not be equated with the entirety of formative assessment practice; no single assessment tool can capture all aspects of student thinking and backgrounds. We address concerns raised by Li et al. regarding AI bias and emphasize the importance of empirical testing and evidence-based arguments when making claims about bias. We assert that AI-based formative assessment does not necessarily lead to inequity and can, in fact, contribute to more equitable educational experiences. Furthermore, we discuss how AI can facilitate the diversification of representational modalities in assessment practices and highlight the potential benefits of AI in saving teachers’ time and providing them with valuable assessment information. We call for a shift in perspective, from viewing AI as a problem to be solved to recognizing its potential as a collaborative tool in education. We emphasize the need for future research to focus on the effective integration of AI in classrooms, teacher education, and the development of AI systems that can adapt to diverse teaching and learning contexts. We conclude by underlining the importance of addressing AI bias, understanding its implications, and developing guidelines for best practices in AI-based formative assessment.
  5.
    A received wisdom is that automated decision-making serves as an anti-bias intervention. The conceit is that removing humans from the decision-making process will also eliminate human bias. The paradox, however, is that in some instances, automated decision-making has served to replicate and amplify bias. With a case study of the algorithmic capture of hiring as a heuristic device, this Article provides a taxonomy of problematic features associated with algorithmic decision-making as an anti-bias intervention and argues that those features are at odds with the fundamental principle of equal opportunity in employment. To examine these problematic features within the context of algorithmic hiring and to explore potential legal approaches to rectifying them, the Article brings together two streams of legal scholarship: law and technology studies and employment and labor law. Counterintuitively, the Article contends that the framing of algorithmic bias as a technical problem is misguided. Rather, the Article’s central claim is that bias is introduced in the hiring process, in large part, due to an American legal tradition of deference to employers, especially the allowance of such nebulous hiring criteria as “cultural fit.” The Article observes the lack of legal frameworks that take into account the emerging technological capabilities of hiring tools, which make it difficult to detect disparate impact. The Article thus argues for a rethinking of legal frameworks that takes into account both the liability of employers and that of the makers of algorithmic hiring systems who, as brokers, owe a fiduciary duty of care. With particular reference to Title VII, the Article proposes that, in legal reasoning corollary to extant tort doctrines, an employer’s failure to audit and correct its automated hiring platforms for disparate impact could serve as prima facie evidence of discriminatory intent, leading to the development of a doctrine of discrimination per se. The Article also considers approaches outside employment law, such as establishing consumer legal protections for job applicants that would mandate their access to the dossier of information consulted by automated hiring systems in making the employment decision.
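As a complement to the Article's legal argument, the sketch below (not part of the Article) illustrates the statistical side of detecting disparate impact: a two-sided two-proportion z-test comparing selection rates between two applicant groups. All counts are hypothetical; the practical difficulty the Article identifies is that applicants and regulators rarely have access to the outcome data such a test requires.

```python
# Illustrative two-proportion z-test on selection rates, one standard way
# a disparate-impact claim is supported statistically. Counts are hypothetical.
import math

def two_proportion_ztest(hired_a, total_a, hired_b, total_b):
    p_a, p_b = hired_a / total_a, hired_b / total_b
    pooled = (hired_a + hired_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

if __name__ == "__main__":
    # Hypothetical applicant pool: 400 applicants from each group.
    z, p = two_proportion_ztest(hired_a=120, total_a=400,
                                hired_b=80, total_b=400)
    print(f"z = {z:.2f}, p = {p:.4f}")
```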