In this study, we explore gender bias in automated face recognition systems in the presence of facial masks using various deep learning algorithms. The paper focuses on an experimental study using an imbalanced image database with a smaller percentage of female subjects than male subjects and examines the impact of masked images in evaluating gender bias. The experiments aim to understand how different algorithms perform in mitigating gender bias in the presence of face masks and highlight the significance of gender distribution within datasets for identifying and mitigating bias. We present the methodology used to conduct the experiments and elaborate on the results obtained from male-only, female-only, and mixed-gender datasets. Overall, this research sheds light on the complexities of gender bias in masked versus unmasked face recognition technology and its implications for real-world applications.
Gender Bias in Contextualized Word Embeddings
In this paper, we quantify, analyze and mitigate gender bias exhibited in ELMo’s contextualized word vectors. First, we conduct several intrinsic analyses and find that (1) training data for ELMo contains significantly more male than female entities, (2) the trained ELMo embeddings systematically encode gender information and (3) ELMo unequally encodes gender information about male and female entities. Then, we show that a state-of-the-art coreference system that depends on ELMo inherits its bias and demonstrates significant bias on the WinoBias probing corpus. Finally, we explore two methods to mitigate such gender bias and show that the bias demonstrated on WinoBias can be eliminated.
- Award ID(s):
- 1760523
- PAR ID:
- 10144868
- Date Published:
- Journal Name:
- Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
- Volume:
- 1
- Page Range / eLocation ID:
- 629 to 634
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Gender, Soft Skills, and Patient Experience in Online Physician Reviews: A Large-Scale Text Analysis
Background: Online physician reviews are an important source of information for prospective patients. In addition, they represent an untapped resource for studying the effects of gender on the doctor-patient relationship. Understanding gender differences in online reviews is important because it may impact the value of those reviews to patients. Documenting gender differences in patient experience may also help to improve the doctor-patient relationship. This is the first large-scale study of physician reviews to extensively investigate gender bias in online reviews or offer recommendations for improvements to online review systems to correct for gender bias and aid patients in selecting a physician. Objective: This study examines 154,305 reviews from across the United States for all medical specialties. Our analysis includes a qualitative and quantitative examination of review content and physician rating with regard to doctor and reviewer gender. Methods: A total of 154,305 reviews were sampled from Google Place reviews. Reviewer and doctor gender were inferred from names. Reviews were coded for overall patient experience (negative or positive) by collapsing a 5-star scale and coded for general categories (process, positive/negative soft skills), which were further subdivided into themes. Computational text processing methods were employed to apply this codebook to the entire data set, rendering it tractable to quantitative methods. Specifically, we estimated binary regression models to examine relationships between physician rating, patient experience themes, physician gender, and reviewer gender. Results: Female reviewers wrote 60% more reviews than men. Male reviewers were more likely to give negative reviews (odds ratio [OR] 1.15, 95% CI 1.10-1.19; P<.001).
Reviews of female physicians were considerably more negative than those of male physicians (OR 1.99, 95% CI 1.94-2.14; P<.001). Soft skills were more likely to be mentioned in the reviews written by female reviewers and about female physicians. Negative reviews of female doctors were more likely to mention candor (OR 1.61, 95% CI 1.42-1.82; P<.001) and amicability (OR 1.63, 95% CI 1.47-1.90; P<.001). Disrespect was associated with both female physicians (OR 1.42, 95% CI 1.35-1.51; P<.001) and female reviewers (OR 1.27, 95% CI 1.19-1.35; P<.001). Female patients were less likely to report disrespect from female doctors than expected from the base ORs (OR 1.19, 95% CI 1.04-1.32; P=.008), but this effect overrode only the effect for female reviewers. Conclusions: This work reinforces findings in the extensive literature on gender differences and gender bias in patient-physician interaction. Its novel contribution lies in highlighting gender differences in online reviews. These reviews inform patients' choice of doctor and thus affect both patients and physicians. The evidence of gender bias documented here suggests review sites may be improved by providing information about gender differences, controlling for gender when presenting composite ratings for physicians, and helping users write less biased reviews.
Purpose: The equitable distribution of donor kidneys is crucial to maximizing transplant success rates and addressing disparities in healthcare data. This study examines potential gender bias in the Deceased Donor Organ Allocation Model (DDOA) by using machine learning and AI to analyze its impact on kidney discard decisions, to ensure fairness in accordance with medical ethics. Methods: The study employs the Deceased Donor Organ Allocation Model (DDOA) (https://ddoa.mst.hekademeia.org/#/kidney) to predict the discard probability of deceased donor kidneys using donor characteristics from the OPTN Deceased Donor Dataset (2016-2023). Using the SRTR SAF dictionary, the dataset consists of 18,029 donor records, where gender was assessed for its effect on discard probability. An ANOVA and a t-test determine whether there is a statistically significant difference between the discard percentages for female and male donors when the donor gender attribute alone is changed. If the p-value obtained from the t-test is less than the significance level (typically 0.05), we reject the null hypothesis and conclude that there is a significant difference; otherwise, we fail to reject the null hypothesis. Results: Figure 1 visualizes the differences in discard percentages between female and male donor kidneys; an unbiased allocation system would be expected to show no difference (i.e., a value of zero). To assess the presence of gender bias, statistical analyses, including t-tests and ANOVA, were performed. The t-test comparing female and male kidney discard rates yielded a t-statistic of 29.690228 and a p-value of 3.586956e-189, below the 0.05 significance threshold. This result leads to the rejection of the null hypothesis, indicating a significant difference in mean discard probability when only the donor gender attribute is altered, and showing that gender plays a significant role in the DDOA model's discard decisions.
Conclusions: The study highlights that altering only the donor gender attribute produces a significant difference in mean discard probability, contributing to kidney discard rates in the DDOA model. These findings reinforce the need for greater transparency in organ allocation models and a reconsideration of the demographic criteria used in the evaluation process. Future research should refine algorithms to minimize biases in organ allocation and investigate kidney discard disparities in transplantation.
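The gender-swap significance test described above can be sketched as follows. This is a minimal illustration only: the discard probabilities here are synthetic stand-ins for DDOA model outputs (the model itself is not reproduced), and only the testing procedure mirrors the study's described methodology.

```python
# Sketch of the gender-swap t-test described in the abstract.
# Assumption: each donor record is scored twice by the model,
# once with gender set to female and once to male; the discard
# probabilities below are synthetic, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 18_029  # number of donor records, as in the OPTN sample

# Hypothetical model outputs with a small assumed offset for
# the female-labeled records, to illustrate a detectable effect.
p_male = rng.beta(2, 5, size=n)
p_female = rng.beta(2, 5, size=n) + 0.02

t_stat, p_value = stats.ttest_ind(p_female, p_male)

alpha = 0.05
if p_value < alpha:
    print(f"t = {t_stat:.3f}, p = {p_value:.3g}: reject H0 "
          "(gender significantly affects discard probability)")
else:
    print(f"t = {t_stat:.3f}, p = {p_value:.3g}: fail to reject H0")
```

A paired test (`stats.ttest_rel`) would be a natural alternative here, since the two samples come from the same records with only the gender attribute flipped; the abstract's description is consistent with either choice.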
Attributing gender discrimination to implicit bias has become increasingly common. However, research suggests that when discrimination is attributed to implicit rather than explicit bias, the perpetrators are held less accountable and deemed less worthy of punishment. The present work examines (a) whether this effect replicates in the domain of gender discrimination, and (b) whether sharing a group membership with the victim moderates the effect. Four studies revealed that both men and women hold perpetrators of gender discrimination less accountable if their behavior is attributed to implicit rather than explicit bias. Moreover, women held male (Studies 1–3), but not female (Study 4), perpetrators of gender discrimination more accountable than did men. Together, these findings suggest that while shared gender group membership may inform judgments of accountability for gender discrimination, it does not weaken the tendency to hold perpetrators less accountable for discrimination attributed to implicit, compared with explicit, bias.
Little is known about what drives gender disparities in health care and related social insurance benefits. Using data and variation from the Texas workers’ compensation program, we study the impact of gender match between doctors and patients on medical evaluations and associated disability benefits. Compared to differences among their male patient counterparts, female patients randomly assigned a female doctor rather than a male doctor are 5.2 percent more likely to be evaluated as disabled and receive 8.6 percent more subsequent cash benefits on average. There is no analogous gender-match effect for male patients. Our estimates indicate that increasing the share of female patients evaluated by female doctors may substantially shrink gender gaps in medical evaluations and associated outcomes. (JEL H75, I11, I12, J14, J16, J28)