Efforts to mitigate bias in faculty hiring processes are well-documented in the literature. Yet significant barriers to the hiring of racially minoritized and White women in many STEM fields remain. An underreported barrier to inclusive hiring is the assessment of risk. Guided by theory from behavioral economics, social psychology, and decision-making, we examine the inner workings of five faculty search committees to understand how committee members identified and assessed risk, with particular attention to assessments of risk that became intermingled with social biases. Committees identified and assessed five risks: candidate interest, candidate disciplinary expertise, candidate competence, candidate collegiality, and the timing and oversight of the search process itself. We discuss implications of risk identification and assessment for effective and inclusive searches.
Nudging Toward Diversity: Applying Behavioral Design to Faculty Hiring
This narrative and integrative literature review synthesizes the literature on when, where, and how the faculty hiring process used in most American higher education settings operates with implicit and cognitive bias. The literature review analyzes the “four phases” of the faculty hiring process, drawing on theories from behavioral economics and social psychology. The results show that although much research establishes the presence of bias in hiring, relatively few studies examine interventions or “nudges” that might be used to mitigate bias and encourage the recruitment and hiring of faculty identified as women and/or faculty identified as being from an underrepresented minority group. This article subsequently makes recommendations for historical, quasi-experimental, and randomized studies to test hiring interventions with larger databases and more controlled conditions than have previously been used, with the goal of establishing evidence-based practices that contribute to a more inclusive hiring process and a more diverse faculty.
- PAR ID: 10172481
- Journal Name: Review of Educational Research
- Volume: 90
- Issue: 3
- ISSN: 0034-6543
- Page Range: 311–348
- Sponsoring Org: National Science Foundation
More Like this
Many colleges and universities now require faculty search committees to use rubrics when evaluating faculty job candidates, as proponents believe these “decision-support tools” can reduce the impact of bias in candidate evaluation. That is, rubrics are intended to ensure that candidates are evaluated more fairly, which is then thought to contribute to the enhanced hiring of candidates from minoritized groups. However, there is scant, and even contradictory, evidence to support this claim. This study used a multiple case study methodology to explore how five faculty search committees used rubrics in candidate evaluation, and the extent to which using a rubric seemed to perpetuate or mitigate bias in committee decision-making. Results showed that the use of rubrics can improve searches by clarifying criteria, encouraging criteria use in evaluation, calibrating the application of criteria to evidence, and, in some cases, bringing diversity, equity, and inclusion (DEI) work into consideration. However, search committees also created and implemented rubrics in ways that seem to perpetuate bias, undermine effectiveness, and potentially contribute to the hiring of fewer minoritized candidates. We conclude by providing stakeholders with practical recommendations on using rubrics and actualizing DEI in faculty hiring.
Background: The lack of racial diversity in science, technology, engineering, and mathematics (STEM) disciplines is perhaps one of the most challenging issues in the United States higher education system. The issue concerns not only diverse students but also diverse faculty members. One important contributing factor is the faculty hiring process. To make progress toward equity in hiring decisions, it is necessary to better understand how applicants are considered and evaluated. In this paper, we present our study, based on a survey of current STEM faculty members and administrators, that examined how applicant qualifications and characteristics are evaluated in STEM faculty hiring decisions. Results: There are three key findings of the present research. First, we found that faculty members placed different levels of importance on characteristics and qualifications for tenure track hiring and non-tenure track hiring. For example, items related to research were more important when evaluating tenure track applicants, whereas items related to teaching and diversity were more important when evaluating non-tenure track applicants. Second, faculty members’ institutional classification, position, and personal identities (e.g., gender, race/ethnicity) had an impact on their evaluation criteria. For instance, we found that men considered some diversity-related items more important than women did. Third, faculty members rated the importance of qualifications with diversity, equity, and inclusion (DEI)-related constructs significantly lower than that of qualifications that did not specify DEI-related constructs, and this trend held for both tenure track and non-tenure track faculty hiring. Conclusions: This study addressed the issue of diversity in STEM faculty hiring at institutions of higher education by examining how applicant characteristics are considered and evaluated in faculty hiring practices. Emphasizing research reputation and postdoctoral reputation while neglecting institutional diversity and equitable and inclusive teaching, research, and service stunts progress toward racial diversity because biases, both implicit and explicit, both positive and negative, still exist. Our results were consistent with research on bias in recruitment, revealing that affinity bias, confirmation bias, and halo bias exist in the faculty hiring process. These biases contribute to inequities in hiring and need to be addressed before we can reach, sustain, and grow desired levels of diversity.
Research has documented the presence of bias against women in hiring, including in academic science, technology, engineering, and mathematics (STEM). Hiring rubrics (also called criterion checklists, decision support tools, and evaluation tools) are widely recommended as a precise, cost-effective remedy to counteract hiring bias, despite a paucity of evidence that they actually work (see table S8). Our in-depth case study of rubric usage in faculty hiring in an academic engineering department at a research-intensive university found that the rate of hiring women increased after the department deployed rubrics and used them to guide holistic discussions. Yet we also found evidence of substantial gender bias persisting in some rubric scoring categories and in evaluators’ written comments. We do not recommend abandoning rubrics. Instead, we recommend a strategic and sociologically astute use of rubrics as a department self-study tool within the context of a holistic evaluation of semifinalist candidates.
A received wisdom is that automated decision-making serves as an anti-bias intervention. The conceit is that removing humans from the decision-making process will also eliminate human bias. The paradox, however, is that in some instances, automated decision-making has served to replicate and amplify bias. Using a case study of the algorithmic capture of hiring as a heuristic device, this Article provides a taxonomy of problematic features associated with algorithmic decision-making as an anti-bias intervention and argues that those features are at odds with the fundamental principle of equal opportunity in employment. To examine these problematic features within the context of algorithmic hiring and to explore potential legal approaches to rectifying them, the Article brings together two streams of legal scholarship: law and technology studies and employment & labor law. Counterintuitively, the Article contends that the framing of algorithmic bias as a technical problem is misguided. Rather, the Article’s central claim is that bias is introduced in the hiring process, in large part, due to an American legal tradition of deference to employers, especially allowing for such nebulous hiring criteria as “cultural fit.” The Article observes the lack of legal frameworks that take into account the emerging technological capabilities of hiring tools, which make it difficult to detect disparate impact. The Article thus argues for a rethinking of legal frameworks that takes into account both the liability of employers and that of the makers of algorithmic hiring systems who, as brokers, owe a fiduciary duty of care. Particularly in relation to Title VII, the Article proposes that, in legal reasoning corollary to extant tort doctrines, an employer’s failure to audit and correct its automated hiring platforms for disparate impact could serve as prima facie evidence of discriminatory intent, leading to the development of the doctrine of discrimination per se. The Article also considers approaches outside of employment law, such as establishing consumer legal protections for job applicants that would mandate their access to the dossier of information consulted by automated hiring systems in making the employment decision.