

Title: The Potential of Diverse Youth as Stakeholders in Identifying and Mitigating Algorithmic Bias for a Future of Fairer AI

Youth regularly use technology driven by artificial intelligence (AI). However, it is increasingly well known that AI can cause harm on small and large scales, especially to those underrepresented in tech fields. Recently, users have played active roles in surfacing and mitigating harm from algorithmic bias. Despite being frequent users of AI, youth have been under-explored as potential contributors to and stakeholders in the future of AI. We consider three notions that may underlie the barriers youth face to playing an active role in responsible AI: that youth (1) cannot understand the technical aspects of AI, (2) cannot understand the ethical issues around AI, and (3) need protection from serious topics related to bias and injustice. In this study, we worked with youth (N = 30) in first through twelfth grade and parents (N = 6) to explore how youth can take part in identifying algorithmic bias and designing future systems to address problematic technology behavior. We found that youth are capable of identifying and articulating algorithmic bias, often in great detail. Participants suggested different ways users could give feedback on AI that reflects their values of diversity and inclusion. Youth who have less experience with computing or less exposure to societal structures can be supported by peers or adults with more of this knowledge, leading to critical conversations about fairer AI. This work illustrates youths' insights and suggests that they should be integrated into building a future of responsible AI.

 
Award ID(s):
1811086
PAR ID:
10503957
Publisher / Repository:
ACM
Date Published:
Journal Name:
Proceedings of the ACM on Human-Computer Interaction
Volume:
7
Issue:
CSCW2
ISSN:
2573-0142
Page Range / eLocation ID:
1 to 27
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Purpose: The purpose of this paper is to offer a critical analysis of talent acquisition software and its potential for fostering equity in the hiring process for underrepresented IT professionals. The under-representation of women, African-American, and Latinx professionals in the IT workforce is a longstanding issue that contributes to and is impacted by algorithmic bias. Design/methodology/approach: Sources of algorithmic bias in talent acquisition software are presented. Feminist design thinking is presented as a theoretical lens for mitigating algorithmic bias. Findings: Data are just one tool for recruiters to use; human expertise is still necessary. Even well-intentioned algorithms are not neutral and should be audited for morally and legally unacceptable decisions. Feminist design thinking provides a theoretical framework for considering equity in the hiring decisions made by talent acquisition systems and their users. Social implications: This research implies that algorithms may serve to codify deep-seated biases, keeping IT work environments as homogeneous as they are currently. If bias exists in talent acquisition software, the potential for propagating inequity and harm is far more significant and widespread due to the homogeneity of the specialists creating artificial intelligence (AI) systems. Originality/value: This work uses equity as a central concept for considering algorithmic bias in talent acquisition. Feminist design thinking provides a framework for fostering a richer understanding of what fairness means and for evaluating how AI software might impact marginalized populations.
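    The abstract above argues that even well-intentioned algorithms should be audited for unacceptable decisions. As a purely illustrative sketch (not from the paper), one common audit check for selection outcomes is the four-fifths (disparate impact) rule; the counts below are made up:

```python
# Hypothetical illustration (not the paper's method): the "four-fifths rule,"
# a common first check for disparate impact in selection outcomes.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def disparate_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of selection rates; values below 0.8 are a common red flag."""
    return protected_rate / reference_rate

# Assumed example counts, purely illustrative.
women_rate = selection_rate(selected=12, applicants=100)  # 0.12
men_rate = selection_rate(selected=20, applicants=100)    # 0.20

ratio = disparate_impact_ratio(women_rate, men_rate)      # 0.60
if ratio < 0.8:
    print(f"Potential disparate impact: ratio = {ratio:.2f} (< 0.8 threshold)")
```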
  2. People form perceptions and interpretations of AI through external sources prior to their interaction with new technology. For example, shared anecdotes and media stories influence prior beliefs that may or may not accurately represent the true nature of AI systems. We hypothesize that people's prior perceptions and beliefs will affect human-AI interactions and usage behaviors when they use new applications. This paper presents a user experiment exploring the interplay between users' pre-existing beliefs about AI technology, individual differences, and previously established sources of cognitive bias from first impressions with an interactive AI application. We employed questionnaire measures as features to categorize users into profiles based on their prior beliefs and attitudes about technology. In addition, participants were assigned to one of two controlled conditions designed to evoke either positive or negative first impressions during an AI-assisted judgment task using an interactive application. The experiment and results provide empirical evidence that profiling users by surveying their prior beliefs and individual differences can be a beneficial approach for mitigating bias (and/or unanticipated usage), rather than seeking one-size-fits-all solutions.
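    The abstract does not specify how the profiles were derived; one plausible sketch of categorizing users from questionnaire measures is clustering standardized Likert-scale responses. The data and cluster count below are assumptions for illustration only:

```python
# Hypothetical sketch (the paper does not specify its method): grouping users
# into profiles from Likert-scale questionnaire responses with k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Assumed data: rows = participants, columns = questionnaire items (1-5 Likert).
responses = np.array([
    [5, 4, 5, 2, 1],  # e.g., high trust in AI, few negative priors
    [1, 2, 1, 5, 5],  # e.g., low trust, strong negative priors
    [4, 5, 4, 1, 2],
    [2, 1, 2, 4, 4],
])

features = StandardScaler().fit_transform(responses)
profiles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(profiles)  # e.g., [0 1 0 1]: two belief/attitude profiles
```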
  3. Inequitable software is a common problem. Bias may be introduced by developers or even by software users. As a society, it is crucial that we understand and identify the causes and implications of software bias, both from users and from the software itself. To address the problem of inequitable software, it is essential that we inform and motivate the next generation of software developers regarding bias and its adverse impacts. However, research shows there is a lack of easily adoptable, ethics-focused educational material to support this effort. We therefore created an easily adoptable, self-contained experiential activity designed to foster student interest in software ethics, with a specific emphasis on AI/ML bias. This activity involves participants selecting fictitious teammates based solely on their appearance. The participant then experiences bias, either against themselves or against a teammate, from the activity's fictitious AI. The resulting lab was used in this study involving 173 real-world users (ages 18-51+) to better understand user bias. The primary findings of our study are: I) participants from minority ethnic groups have stronger feelings about being impacted by inequitable software/AI; II) participants with higher interest in AI/ML more strongly believe unbiased software should be a priority; III) users do not act in an equitable manner, as avatars with 'dark' skin color are less likely to be selected; and IV) participants from different demographic groups exhibit similar behavioral bias. The experiential lab activity can be run using only a browser and an internet connection, and it is publicly available on our project website: https://all.rit.edu.
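    As an illustration of how a disparity like finding III could be checked, one standard approach is a chi-square test of independence on selection counts by depicted skin tone; the counts below are invented for the sketch and are not the study's data:

```python
# Illustrative sketch with made-up counts (not the study's data): testing
# whether avatar selection rates differ by depicted skin tone.
from scipy.stats import chi2_contingency

#             [selected, not selected]  -- hypothetical tallies
dark_skin  = [40, 110]
light_skin = [75, 75]

chi2, p, dof, expected = chi2_contingency([dark_skin, light_skin])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# A small p-value suggests selection is not independent of skin tone.
```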
  4. Recent increases in self-harm and suicide rates among youth have coincided with prevalent social media use, making these sensitive topics critically important to the HCI research community. We analyzed 1,224 direct message (DM) conversations from 151 young Instagram users (ages 13-21) who engaged in private conversations using self-harm and suicide-related language. We found that youth discussed their personal experiences, including imminent thoughts of suicide and/or self-harm, as well as their past attempts and recovery. They gossiped about others, including complaining about triggering content and coercive threats of self-harm and suicide, but also tried to intervene when a friend was in danger. Most of the conversations involved suicide or self-harm language that did not indicate intent to harm but instead used hyperbolic language or humor. Our results shed light on youth perceptions, norms, and experiences of self-harm and suicide to inform future efforts toward risk detection and prevention. Content Warning: This paper discusses the sensitive topics of self-harm and suicide. Reader discretion is advised.
  5. The rapid expansion of Artificial Intelligence (AI) creates a need to educate students to be knowledgeable about AI and aware of its interrelated technical, social, and human implications. The latter (ethics) is particularly important for K-12 students because they may have been interacting with AI through everyday technology without realizing it. They may be targeted by AI-generated fake content on social media and may have been victims of algorithmic bias in AI applications such as facial recognition and predictive policing. To empower students to recognize ethics-related issues in AI, this paper reports the design and implementation of a suite of ethics activities embedded in the Developing AI Literacy (DAILy) curriculum. These activities engage students in investigating bias in existing technologies, experimenting with ways to mitigate potential bias, and redesigning the YouTube recommendation system in order to understand different aspects of AI-related ethics issues. Our observations of implementing these lessons among adolescents, together with exit interviews, show that students were highly engaged and, after these ethics lessons, became aware of the potential harms and consequences of AI tools in everyday life.
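    As a hedged illustration of the recommender-redesign activity (this is not the DAILy material itself), the toy sketch below shows how ranking purely by popularity can crowd out less-viewed creators, along with one simple re-ranking mitigation; all names and counts are made up:

```python
# Hypothetical classroom-style sketch (not the DAILy materials): a toy
# recommender that ranks purely by popularity, plus a simple mitigation
# that re-ranks to guarantee some creator diversity in the top results.

videos = [  # (title, creator_group, view_count) -- invented data
    ("A", "majority", 9000), ("B", "majority", 8000), ("C", "majority", 7000),
    ("D", "minority", 6500), ("E", "minority", 400),
]

def popularity_rank(items, k=3):
    """Biased baseline: most-viewed first, so popular creators dominate."""
    return sorted(items, key=lambda v: v[2], reverse=True)[:k]

def diversity_rerank(items, k=3, min_minority=1):
    """Mitigation: keep popularity order but reserve a slot for minority creators."""
    ranked = sorted(items, key=lambda v: v[2], reverse=True)
    top = ranked[:k]
    if sum(v[1] == "minority" for v in top) < min_minority:
        best_minority = next(v for v in ranked if v[1] == "minority")
        top[-1] = best_minority  # swap the lowest slot for the best minority item
    return top

print(popularity_rank(videos))   # all "majority" creators
print(diversity_rerank(videos))  # at least one "minority" creator surfaces
```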