
Title: The Potential of Diverse Youth as Stakeholders in Identifying and Mitigating Algorithmic Bias for a Future of Fairer AI
Youth regularly use technology driven by artificial intelligence (AI). However, it is increasingly well known that AI can cause harm on small and large scales, especially for those underrepresented in tech fields. Recently, users have played active roles in surfacing and mitigating harm from algorithmic bias. Despite being frequent users of AI, youth have been under-explored as potential contributors to and stakeholders in the future of AI. We consider three notions that may underlie the barriers youth face in playing an active role in responsible AI: that youth (1) cannot understand the technical aspects of AI, (2) cannot understand the ethical issues around AI, and (3) need protection from serious topics related to bias and injustice. In this study, we worked with youth (N = 30) in first through twelfth grade and parents (N = 6) to explore how youth can take part in identifying algorithmic bias and designing future systems to address problematic technology behavior. We found that youth are capable of identifying and articulating algorithmic bias, often in great detail. Participants suggested different ways users could give feedback on AI that reflects their values of diversity and inclusion. Youth who have less experience with computing or less exposure to societal structures can be supported by peers or adults with more of this knowledge, leading to critical conversations about fairer AI. This work illustrates youths' insights, suggesting that they should be integrated in building a future of responsible AI.
Award ID(s):
1811086
PAR ID:
10503957
Publisher / Repository:
ACM
Date Published:
Journal Name:
Proceedings of the ACM on Human-Computer Interaction
Volume:
7
Issue:
CSCW2
ISSN:
2573-0142
Page Range / eLocation ID:
1 to 27
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Purpose: This paper offers a critical analysis of talent acquisition software and its potential for fostering equity in the hiring process for underrepresented IT professionals. The under-representation of women, African-American, and Latinx professionals in the IT workforce is a longstanding issue that both contributes to and is impacted by algorithmic bias. Design/methodology/approach: Sources of algorithmic bias in talent acquisition software are presented, and feminist design thinking is presented as a theoretical lens for mitigating algorithmic bias. Findings: Data are just one tool for recruiters to use; human expertise is still necessary. Even well-intentioned algorithms are not neutral and should be audited for morally and legally unacceptable decisions. Feminist design thinking provides a theoretical framework for considering equity in the hiring decisions made by talent acquisition systems and their users. Social implications: This research implies that algorithms may serve to codify deep-seated biases, keeping IT work environments as homogeneous as they are currently. If bias exists in talent acquisition software, the potential for propagating inequity and harm is far more significant and widespread due to the homogeneity of the specialists creating artificial intelligence (AI) systems. Originality/value: This work uses equity as a central concept for considering algorithmic bias in talent acquisition. Feminist design thinking provides a framework for fostering a richer understanding of what fairness means and for evaluating how AI software might impact marginalized populations.
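    The audit this abstract calls for can be made concrete. Below is a minimal sketch, assuming hypothetical screening outcomes, of one common check in US hiring audits: the "four-fifths rule" adverse-impact ratio, which flags any group whose selection rate falls below 80% of the highest group's rate. This code is not from the paper; the group labels and counts are invented for illustration.

```python
# Minimal adverse-impact audit sketch (four-fifths rule).
# All group labels and counts below are hypothetical.
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, selected: bool) tuples."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest rate.
    Values below 0.8 are conventionally flagged for review."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes from a talent-acquisition system.
decisions = [("A", True)] * 40 + [("A", False)] * 60 \
          + [("B", True)] * 20 + [("B", False)] * 80

rates = selection_rates(decisions)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rates[group]:.2f} ratio={ratio:.2f} [{flag}]")
```

    With these made-up counts, group B's selection rate (0.20) is half of group A's (0.40), so the ratio of 0.5 falls below the 0.8 threshold and the group is flagged; a real audit would also weigh legal and moral criteria the rule alone cannot capture.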
  2. People form perceptions and interpretations of AI from external sources prior to their interaction with a new technology. For example, shared anecdotes and media stories influence prior beliefs that may or may not accurately represent the true nature of AI systems. We hypothesize that people's prior perceptions and beliefs will affect human-AI interactions and usage behaviors when they use new applications. This paper presents a user experiment exploring the interplay between users' pre-existing beliefs about AI technology, individual differences, and previously established sources of cognitive bias from first impressions with an interactive AI application. We employed questionnaire measures as features to categorize users into profiles based on their prior beliefs and attitudes about technology. In addition, participants were assigned to one of two controlled conditions designed to evoke either positive or negative first impressions during an AI-assisted judgment task using an interactive application. The experiment and results provide empirical evidence that profiling users by surveying their prior beliefs and individual differences can be a beneficial approach to mitigating bias (and/or unanticipated usage), rather than seeking one-size-fits-all solutions.
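    As an illustration of the profiling step described above, here is a hedged sketch that clusters hypothetical Likert-scale questionnaire responses into coarse prior-belief profiles. The items, the data, and the choice of k-means are assumptions for illustration, not the paper's actual measures or method.

```python
# Sketch: group participants into prior-belief profiles from
# questionnaire features. Items and responses are invented.
import numpy as np
from sklearn.cluster import KMeans

# Rows: participants. Columns: hypothetical 1-5 Likert items
# ("AI is trustworthy", "AI decisions are fair", "I rely on new tech").
responses = np.array([
    [5, 4, 5],
    [4, 5, 4],
    [2, 1, 2],
    [1, 2, 1],
    [3, 3, 4],
    [2, 2, 1],
])

# Two coarse profiles: roughly "positive" vs. "skeptical" priors.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(responses)

for participant, profile in enumerate(kmeans.labels_):
    print(f"participant {participant}: profile {profile}")
```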
  3. Recent increases in self-harm and suicide rates among youth have coincided with prevalent social media use, making these sensitive topics critically important to the HCI research community. We analyzed 1,224 direct message conversations (DMs) from 151 young Instagram users (ages 13-21) who engaged in private conversations using self-harm and suicide-related language. We found that youth discussed their personal experiences, including imminent thoughts of suicide and/or self-harm, as well as their past attempts and recovery. They gossiped about others, including complaining about triggering content and coercive threats of self-harm and suicide, but also tried to intervene when a friend was in danger. Most of the conversations involved suicide or self-harm language that did not indicate intent to harm but instead used hyperbolic language or humor. Our results shed light on youth perceptions, norms, and experiences of self-harm and suicide to inform future efforts toward risk detection and prevention. Content Warning: This paper discusses the sensitive topics of self-harm and suicide. Reader discretion is advised.
  4. Inequitable software is a common problem. Bias may be introduced by developers or even by software users. As a society, it is crucial that we understand and identify the causes and implications of software bias, both from users and from the software itself. To address the problems of inequitable software, it is essential that we inform and motivate the next generation of software developers regarding bias and its adverse impacts. However, research shows that there is a lack of easily adoptable, ethics-focused educational material to support this effort. To address the problem of inequitable software, we created an easily adoptable, self-contained experiential activity designed to foster student interest in software ethics, with a specific emphasis on AI/ML bias. This activity involves participants selecting fictitious teammates based solely on their appearance. The participant then experiences bias, either against themselves or a teammate, from the activity's fictitious AI. The created lab was then utilized in this study involving 173 real-world users (ages 18-51+) to better understand user bias. The primary findings of our study include: I) participants from minority ethnic groups have stronger feelings about being impacted by inequitable software/AI, II) participants with higher interest in AI/ML express stronger belief that unbiased software should be a priority, III) users do not act in an equitable manner, as avatars with 'dark' skin color are less likely to be selected, and IV) participants from different demographic groups exhibit similarly biased behavior. The experiential lab activity may be executed using only a browser and an internet connection, and is publicly available on our project website: https://all.rit.edu. A sketch of how finding III could be quantified follows below.
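    Finding III is, at heart, a selection-rate comparison. The sketch below shows one way such a disparity could be tested, using a chi-square test on a contingency table; the counts are fabricated for illustration and are not the study's data.

```python
# Illustrative analysis sketch for a selection-rate disparity.
# The contingency table below is made up and is NOT the study's data.
from scipy.stats import chi2_contingency

# Rows: avatar skin tone. Columns: [selected, not selected].
table = [
    [30, 70],  # 'dark' avatars (hypothetical counts)
    [55, 45],  # 'light' avatars (hypothetical counts)
]

chi2, p, dof, _ = chi2_contingency(table)
rate_dark = table[0][0] / sum(table[0])
rate_light = table[1][0] / sum(table[1])

print(f"selection rate (dark):  {rate_dark:.2f}")
print(f"selection rate (light): {rate_light:.2f}")
print(f"chi-square={chi2:.2f}, p={p:.4f} (df={dof})")
```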
  5. Background: There is increased interest in using artificial intelligence (AI) to provide participation-focused pediatric re/habilitation. Existing reviews of AI use in participation-focused pediatric re/habilitation focus on interventions and do not screen articles based on their definition of participation. AI-based assessments may help reduce provider burden and can support operationalization of the construct under investigation. To extend knowledge of the landscape of AI use in participation-focused pediatric re/habilitation, a scoping review of AI-based participation-focused assessments is needed. Objective: To understand how the construct of participation is captured and operationalized in pediatric re/habilitation using AI. Methods: We conducted a scoping review of literature published in PubMed, PsycInfo, ERIC, CINAHL, IEEE Xplore, ACM Digital Library, ProQuest Dissertations and Theses, ACL Anthology, AAAI Digital Library, and Google Scholar. Documents were screened by 2-3 independent researchers following a systematic procedure and using the following inclusion criteria: (1) focuses on capturing participation using AI; (2) includes data on children and/or youth with a congenital or acquired disability; and (3) published in English. Data from included studies were extracted [e.g., demographics, type(s) of AI used], summarized, and sorted into categories of participation-related constructs. Results: Twenty-one of 3,406 documents were included. Included assessment approaches mainly captured participation through annotated observations (n = 20; 95%), were administered in person (n = 17; 81%), and applied machine learning (n = 20; 95%) and computer vision (n = 13; 62%). None integrated the child or youth perspective, and only one included the caregiver perspective. All assessment approaches captured behavioral involvement, and none captured emotional or cognitive involvement or attendance. Additionally, 24% (n = 5) of the assessment approaches captured participation-related constructs like activity competencies, and 57% (n = 12) captured aspects not included in contemporary frameworks of participation. Conclusions: Main gaps for future research include the lack of: (1) research reporting on common demographic factors and including samples representing the population of children and youth with a congenital or acquired disability; (2) AI-based participation assessment approaches integrating the child or youth perspective; (3) remotely administered AI-based assessment approaches capturing both child or youth attendance and involvement; and (4) AI-based assessment approaches aligning with contemporary definitions of participation.