In this article, we ask whether macro-level changes during the first year of the COVID-19 pandemic relate to changes in the levels of discrimination against women and Black job-seekers at the point of hire. We develop three main hypotheses: that discrimination against women and Black job-seekers increases due to a reduction in labor demand; that discrimination against women decreases due to the reduced supply of women employees and applicants; and that discrimination against Black job-seekers decreases due to increased attention toward racial inequities associated with the Black Lives Matter protests during the summer of 2020. We test these hypotheses using a correspondence audit study collected over two periods, before and during the early COVID-19 pandemic, for one professional occupation: accountants. We find that White women experience a positive change in callbacks during the pandemic, being preferred over White men, and this change is concentrated in geographic areas that experienced relatively larger decreases in women's labor supply. Black women experience discrimination pre-pandemic but receive similar callbacks to White men during the pandemic. In contrast to both White and Black women, discrimination against Black men is persistent before and during the pandemic. Our findings are consistent with the prediction of gender-specific changes in labor supply being associated with gender-specific changes in hiring discrimination during the COVID-19 pandemic. More broadly, our study shows how hiring decision-making is related to macro-level labor market processes.
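The abstract above rests on comparing callback rates across two collection periods. As a rough illustration of that kind of comparison (a minimal sketch, not the paper's data or statistical method), a two-proportion z-test for a change in callback rates can be written in plain Python; all counts below are hypothetical:

```python
from math import erf, sqrt

def two_proportion_ztest(k1, n1, k2, n2):
    """Two-sided z-test for a difference between two callback rates."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p1 - p2, z, p_value

# Hypothetical counts, NOT the study's data: callbacks out of applications
# sent before vs. during the pandemic for one resume profile.
diff, z, p = two_proportion_ztest(90, 600, 120, 600)
print(f"callback-rate change: {diff:+.3f}, z = {z:.2f}, p = {p:.3f}")
```

A negative `diff` here means the callback rate rose in the second period; audit studies of this kind draw their conclusions from many such contrasts across resume profiles and regions.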
Race, Gender and Beauty: The Effect of Information Provision on Online Hiring Biases
We conduct a study of hiring bias on a simulation platform where we ask Amazon MTurk participants to make hiring decisions for a mathematically intensive task. Our findings suggest hiring biases against Black workers and less attractive workers, and preferences toward Asian workers, female workers, and more attractive workers. We also show that certain UI designs, including provision of candidates' information at the individual level and reducing the number of choices, can significantly reduce discrimination. However, provision of candidates' information at the subgroup level can increase discrimination. The results have practical implications for designing better online freelance marketplaces.
- PAR ID: 10178842
- Journal Name: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
- Page Range / eLocation ID: 1 to 11
- Sponsoring Org: National Science Foundation
More Like this
-
The needs of a business (e.g., hiring) may require the use of certain features that are critical in a way that any discrimination arising due to them should be exempted. In this work, we propose a novel information-theoretic decomposition of the total discrimination (in a counterfactual sense) into a non-exempt component, which quantifies the part of the discrimination that cannot be accounted for by the critical features, and an exempt component, which quantifies the remaining discrimination. Our decomposition enables selective removal of the non-exempt component if desired. We arrive at this decomposition through examples and counterexamples that enable us to first obtain a set of desirable properties that any measure of non-exempt discrimination should satisfy. We then demonstrate that our proposed quantification of non-exempt discrimination satisfies all of them. This decomposition leverages a body of work from information theory called Partial Information Decomposition (PID). We also obtain an impossibility result showing that no observational measure of non-exempt discrimination can satisfy all of the desired properties, which leads us to relax our goals and examine alternative observational measures that satisfy only some of these properties. We then perform a case study using one observational measure to show how one might train a model allowing for exemption of discrimination due to critical features.
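The exempt/non-exempt distinction can be illustrated loosely with a toy path-freezing simulation: a protected attribute influences both a critical feature and a non-critical proxy, and the disparity transmitted through each channel is measured separately. This is a made-up sketch for intuition only, not the paper's PID-based counterfactual measure; the model, features, and coefficients are all hypothetical:

```python
import random

random.seed(0)

def features(z):
    # Toy structural model: both features depend on the protected attribute z.
    critical = 1.0 if random.random() < (0.7 if z else 0.5) else 0.0  # e.g., a required certification
    noncritical = random.gauss(0.5 * z, 1.0)                          # a proxy channel
    return critical, noncritical

def score(critical, noncritical):
    return 2.0 * critical + 1.0 * noncritical

def mean_score(z, n=100_000, freeze_noncritical=False):
    total = 0.0
    for _ in range(n):
        c, nc = features(z)
        if freeze_noncritical:
            _, nc = features(0)   # route the non-critical channel through z = 0
        total += score(c, nc)
    return total / n

# Total counterfactual disparity, and the part flowing through the critical feature.
total_disc = mean_score(1) - mean_score(0)
exempt = mean_score(1, freeze_noncritical=True) - mean_score(0)
print(f"total: {total_disc:.2f}, exempt: {exempt:.2f}, non-exempt: {total_disc - exempt:.2f}")
```

In this toy, "selective removal" would mean eliminating only the non-exempt share (the proxy channel) while leaving the disparity due to the critical feature in place; the paper's information-theoretic decomposition makes that split principled rather than path-by-path.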
-
The individualistic nature of gig work allows workers to have high levels of flexibility, but it also leads to atomization, leaving them isolated from peer workers. In this paper, we employed a qualitative approach to understand how online social media groups provide informational and emotional support to physical gig workers during the COVID-19 pandemic. We found that social media groups alleviate the atomization effect, as workers use these groups to obtain experiential knowledge from their peers, build connections, and organize collective action. However, we noted a reluctance among workers to share strategic information where there was a perceived risk of being competitively disadvantaged. In addition, we found that the diversity among gig workers has also led to limited empathy for one another, which further impedes the provision of emotional support. While social media groups could potentially become places where workers organize collective efforts, several factors, including the uncertainty of other workers' activities and the understanding of the independent contractor status, have diminished the effectiveness of efforts at collective action.
-
Otterbring, Tobias (Ed.)
Extensive literature probes labor market discrimination through correspondence studies, in which researchers send employers pairs of resumes that are closely matched except for social signals such as gender or ethnicity. Upon perceiving these signals, individuals quickly activate associated stereotypes. The Stereotype Content Model (SCM; Fiske 2002) categorizes these stereotypes into two dimensions: warmth and competence. Our research integrates findings from correspondence studies with theories of social psychology, asking: can discrimination between social groups, measured through employer callback disparities, be predicted by warmth and competence perceptions of social signals? We collected callback rates from 21 published correspondence studies, covering 592 social signals, and gathered warmth and competence perceptions of those signals from an independent group of online raters. We found that social perception predicts callback disparities for studies varying race and gender, which are signaled indirectly by the names on resumes. Yet for studies varying other categories, such as sexuality and disability, the influence of social perception on callbacks is inconsistent. For instance, a more favorable perception of signals like parenthood does not consistently lead to increased callbacks, underscoring the need for further research. Our research offers practical strategies to address labor market discrimination. Leveraging the warmth and competence framework allows for the predictive identification of bias against specific groups without extensive correspondence studies. By distilling hiring discrimination into these two dimensions, we not only facilitate the development of decision-support systems for hiring managers but also equip computer scientists with a foundational framework for debiasing large language models and other methods increasingly employed in hiring processes.
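The core idea of predicting callback disparities from perception ratings can be sketched as a simple regression. The ratings and disparities below are invented for illustration and are not the study's data; the study's actual modeling is richer than this single-predictor fit:

```python
from statistics import mean

# Hypothetical ratings and callback gaps, NOT the study's data:
# each row = (signal, warmth, competence, callback disparity vs. reference group).
signals = [
    ("white-male name",   3.4, 3.9,  0.00),
    ("white-female name", 3.8, 3.6, -0.02),
    ("black-male name",   3.1, 3.2, -0.08),
    ("black-female name", 3.3, 3.3, -0.06),
    ("parent",            4.0, 3.1, -0.03),
]

# Simple OLS of disparity on a combined perception score (warmth + competence).
x = [w + c for _, w, c, _ in signals]
y = [d for _, _, _, d in signals]
xbar, ybar = mean(x), mean(y)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
intercept = ybar - slope * xbar
print(f"disparity ≈ {intercept:.3f} + {slope:.3f} × perception score")
```

A positive slope corresponds to the paper's headline pattern for race and gender signals: groups perceived as warmer and more competent suffer smaller callback penalties. The inconsistent cases (e.g., parenthood) are precisely the rows where such a fit would break down.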
-
A received wisdom is that automated decision-making serves as an anti-bias intervention. The conceit is that removing humans from the decision-making process will also eliminate human bias. The paradox, however, is that in some instances, automated decision-making has served to replicate and amplify bias. With a case study of the algorithmic capture of hiring as a heuristic device, this Article provides a taxonomy of problematic features associated with algorithmic decision-making as an anti-bias intervention and argues that those features are at odds with the fundamental principle of equal opportunity in employment. To examine these problematic features within the context of algorithmic hiring and to explore potential legal approaches to rectifying them, the Article brings together two streams of legal scholarship: law and technology studies and employment and labor law. Counterintuitively, the Article contends that the framing of algorithmic bias as a technical problem is misguided. Rather, the Article's central claim is that bias is introduced in the hiring process, in large part, due to an American legal tradition of deference to employers, especially one allowing for such a nebulous hiring criterion as "cultural fit." The Article observes the lack of legal frameworks that take into account the emerging technological capabilities of hiring tools, which make it difficult to detect disparate impact. The Article thus argues for a rethinking of legal frameworks that takes into account both the liability of employers and that of the makers of algorithmic hiring systems who, as brokers, owe a fiduciary duty of care. Particularly as related to Title VII, the Article proposes that, in legal reasoning corollary to extant tort doctrines, an employer's failure to audit and correct its automated hiring platforms for disparate impact could serve as prima facie evidence of discriminatory intent, leading to the development of a doctrine of discrimination per se. The Article also considers approaches outside employment law, such as establishing consumer legal protections for job applicants that would mandate access to the dossier of information consulted by automated hiring systems in making employment decisions.