As dating websites become an essential part of how people meet intimate and romantic partners, it is vital to design these systems to resist, or at least not amplify, bias and discrimination. The results of our online experiment with a simulated dating website, however, demonstrate that popular dating website design choices, such as the use of the swipe interface (swiping in one direction to indicate a like and in the other direction to express a dislike) and match scores, led people to make racially biased choices even when they explicitly claimed not to have considered race in their decision-making. This bias was significantly reduced when the order of information presentation was reversed so that people first saw substantive profile information related to their explicitly stated preferences before seeing the profile name and photo. These results indicate that currently popular design choices amplify people's implicit biases in their choices of potential romantic partners, but the effects of these implicit biases can be reduced by carefully redesigning dating website interfaces.
Race-positive Design: A Generative Approach to Decolonizing Computing
Removing racial bias from algorithms or social processes is necessary, but alone it is insufficient. The “bias” framework tends to treat race as unwanted noise, best suppressed or eliminated. This attitude extends to classrooms, where an attempt to be “colorblind” leads to what Pollock calls “colormute”: fearful of even mentioning race. Just as feminists developed “sex-positive feminism” in the 1970s, we now need race-positive design. Thinking about race as positive presence—as cultural capital, histories of resistance, bindings between lands and peoples—can be a generative force in computing development. Here we detail the application and assessment of African fractals, Native American bio-computation, urban artisanal cyborgs, and other hybrid forms in which race-positive technology design can make important contributions. These include community-based CS education, computational support for sustainable architecture, unalienated labor in human-machine collaboration, and other forms of generative justice.
- Award ID(s):
- 1930072
- PAR ID:
- 10182458
- Date Published:
- Journal Name:
- Human factors in computing systems
- ISSN:
- 1062-9432
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Previous studies have shown that artificial intelligence can be used to classify instruction-related activities in classroom videos. The automated classification of human activities, however, is vulnerable to biases in which the model performs substantially better or worse for different people groups. Although algorithmic bias has been highlighted as an important area for research in artificial intelligence in education, there have been few studies that empirically investigate potential bias in instruction-related activity recognition systems. In this paper, we report on an investigation of potential racial and skin tone biases in the automated classification of teachers’ activities in classroom videos. We examine whether a neural network’s classification of teachers’ activities differs with respect to teacher race and skin tone and whether differently balanced training datasets affect the performance of the neural network. Our results indicate that, under ordinary classroom lighting conditions, the neural network performs equally well regardless of teacher race or skin tone. Furthermore, our results suggest the balance of the training dataset with respect to teacher skin tone and race has a small—but not necessarily positive—effect on the neural network’s performance. Our study, however, also suggests the importance of quality lighting for accurate classification of teacher-related instructional activities for teachers of color. We conclude with a discussion of our mixed findings, the limitations of our study, and potential directions for future research.
-
Do students learn from video lessons presented by pedagogical agents of different racial and gender types equivalently to those delivered by a real human instructor? How do the race and gender of these agents impact students’ learning experiences and outcomes? In this between-subject design study, college students were randomly assigned to view a 9-minute video lesson on chemical bonds, presented by pedagogical agents varying in gender (male, female) and race (Asian, Black, White), or to view the original lesson with a real human instructor. In comparing learning with a human instructor versus with a pedagogical agent of various races and genders, ANOVAs revealed no significant differences in learning outcomes (retention and transfer scores) or learner emotions, but students reported a stronger social connection with the human instructor over pedagogical agents. Students reported stronger positive emotions and social connections with female agents over male agents. Additionally, there was limited evidence of a race-matching effect, with White students showing greater positive emotion while learning with pedagogical agents of the same race. These findings highlight the limitations of pedagogical agents compared to human instructors in video lessons, while partially reflecting gender stereotypes and intergroup bias in instructor evaluations.
-
Much of the world’s population experiences some form of disability during their lifetime. Caution must be exercised while designing natural language processing (NLP) systems to prevent systems from inadvertently perpetuating ableist bias against people with disabilities, i.e., prejudice that favors those with typical abilities. We report on various analyses based on word predictions of a large-scale BERT language model. Statistically significant results demonstrate that people with disabilities can be disadvantaged. Findings also explore overlapping forms of discrimination related to interconnected gender and race identities.
-
This paper turns to one of HCI’s central value systems, i.e. its commitments to usefulness and the ideal that technology enables social progress, productivity, and excellence. Specifically, we examine how the seemingly “positive” ideal to make technology “useful” – i.e. to build systems and devices that advance social and technological progress – masks various forms of violence and injustice such as colonial othering, racist exclusions, and exploitation. Drawing from ethnographic research, we show how design and computing methods from design thinking to agile theory and entrepreneurial approaches in tech production and higher education are the latest techniques in the cultivation of useful bodies on behalf of the state, the corporation, the university, and the economy. Aligning with feminist, critical race and critical computing commitments, this paper offers a genealogical approach to show how injustice and violence endure, despite and because of a narrative of progress and positive change.

