Title: Investigating What Factors Influence Users’ Rating of Harmful Algorithmic Bias and Discrimination
There has been growing recognition of the crucial role users, especially those from marginalized groups, play in uncovering harmful algorithmic biases. However, it remains unclear how users’ identities and experiences might impact their rating of harmful biases. We present an online experiment (N=2,197) examining these factors: demographics, discrimination experiences, and social and technical knowledge. Participants were shown examples of image search results, including ones that previous literature has identified as biased against marginalized racial, gender, or sexual orientation groups. We found that participants from marginalized gender or sexual orientation groups were more likely to rate the examples as more severely harmful; belonging to a marginalized racial group showed no similar pattern. Additional factors affecting users’ ratings included discrimination experiences and having friends or family belonging to marginalized demographics. A qualitative analysis offers insight into how users recognize bias and why they see biases the way they do. We provide guidance for designing future methods to support effective user-driven auditing.
Jennings, M.; Kellam, N.; Coley, B.
(Frontiers in Education)
"Many engineering students from marginalized populations have had negative experiences regarding their identities (race/ethnicity, sexual orientation, gender expression, etc.) as a result of the culture in engineering. These negative experiences may range from microaggressions regarding a person’s marginalized identity – “It’s impressive for a woman to be in engineering” – to outright discrimination, such as being called a demoralizing slur by a peer. Often associated with these experiences is a lack of support or intervention from faculty, advisors, or staff, as well as difficulty finding mentors that students can identify with. This special session aims to share authentic stories from diverse engineering students to facilitate a discussion of solutions amongst attendees."
Fairchild, Ennea; Sexton, Julie; Newman, Harmony; Hinerman, Krystal; McKay, Jessica; Riggs, Eric
(Journal of Geoscience Education)
Undergraduate summer field programs are valuable experiences that can foster or reduce students’ self-efficacy, an important factor in students’ success and retention in geoscience. A growing body of research shows that science field experiences can be hostile and unwelcoming to students with marginalized identities, which may negatively impact their self-efficacy in geoscience, a discipline with a dearth of students from underrepresented, marginalized identities. We conducted an interpretive qualitative study examining how summer geoscience field programs affected two undergraduate, marginalized students’ self-efficacy. Adding to existing theoretical explanations of self-efficacy, we identified three types of self-efficacy impacted positively and negatively by geoscience field experiences: academic, physical, and social self-efficacy. We developed a nuanced understanding of the specific field experiences that influenced the ‘ups and downs’ of students’ self-efficacy and, ultimately, their intent to continue pursuing a geoscience education or career. Despite negative experiences, including gender discrimination, crude sexual jokes, and a lack of belonging, the students described their intent to persist in geoscience. Our findings can help geoscience educators (and others in field-based sciences) consider experiences that support and hinder marginalized students’ self-efficacy, and they can guide efforts to improve geoscience field programs to create more inclusive environments.
Katcher, Samantha; Wang, Liana; Yang, Caroline; Messdaghi, Chloé; Mazurek, Michelle L; Chetty, Marshini; Fulton, Kelsey R; Votipka, Daniel
(USENIX Symposium on Usable Privacy and Security (SOUPS))
The cybersecurity workforce lacks diversity; the field is predominantly male and White or Asian, with only 10% identifying as women, Latine, or Black. Previous studies identified access to supportive communities as a possible disparity between marginalized and non-marginalized cybersecurity professional populations and highlighted this support as a key to career success. We focus on these community experiences by conducting a survey of 342 cybersecurity professionals to identify differences in perceptions and experiences of belonging across demographic groups. Our results show a discrepancy in experiences across gender identities, with women more likely than men to report experiencing harassment and unsupportive environments because of their gender. Psychological safety was low across all demographic groups, meaning participants did not feel comfortable engaging with or speaking up in the community. Based on these results, we provide recommendations to community leaders.
Mardani, Mojdeh; Stupnisky, Robert
(International Conference on Gender Research)
Negative and often unconscious beliefs about marginalised groups, including women and people of colour, sometimes manifest in discriminatory and degrading slights called microaggressions. Because microaggressions most often take the form of subtle actions, unobtrusive comments, or humorous gestures, they are frequently dismissed as innocent and harmless, especially by bystanders. However, their adverse effects on those on the receiving end are anything but innocuous, even if perpetrators are utterly unaware of their harmful comments or behaviours. Minorities and marginalised individuals often find microaggressions more harmful than blatant racism and discrimination. Six hundred and eleven STEM (Science, Technology, Engineering, Math) faculty from ten US universities completed an online survey in the spring of 2021, of whom 39% self-identified as underrepresented minority (URM) faculty. This study revealed that, on average, URM women were 50% more susceptible to gender microaggressions, which correlated negatively with autonomy (having choice) and competence (being capable and effective), and positively with amotivation (lack of motivation). Case in point: 38% of them believed their opinions were overlooked in group discussions because of their gender. Women with intersecting identities, such as women of colour, experienced both gender and racial/ethnic microaggressions: they reported being ignored at work, being treated differently, and having their opinions overlooked based on their gender and/or race/ethnicity. While detecting microaggressions and acknowledging their occurrence is crucial, taking deliberate and precise actions to disrupt and prevent them from recurring is even more pivotal. By documenting the prevalence of discrimination and microaggressions towards underrepresented minority female faculty, and by sharing insights into the complex and overarching racial, ethnic, and gender relations among other social constructs, this study deepens our understanding of the challenges and barriers that this group must grapple with. By adopting and creating effective institutional policies and professional training in support of diversity, inclusion, and cultural competency, we can improve the experiences of URM faculty and positively impact their motivation and productivity.
Kingsley, Sara; Zhi, Jiayin; Deng, Wesley Hanwen; Lee, Jaimie; Zhang, Sizhe; Eslami, Motahhare; Holstein, Kenneth; Hong, Jason I; Li, Tianshi; Shen, Hong. “Investigating What Factors Influence Users’ Rating of Harmful Algorithmic Bias and Discrimination.” Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 12. AAAI. doi:10.1609/hcomp.v12i1.31602. https://par.nsf.gov/biblio/10573774.