

Search for: All records

Award ID contains: 1811086

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available June 10, 2026
  2. Free, publicly-accessible full text available April 25, 2026
  3. Free, publicly-accessible full text available February 12, 2026
  4. Youth regularly use technology driven by artificial intelligence (AI). However, it is increasingly well known that AI can cause harm on small and large scales, especially for those underrepresented in tech fields. Recently, users have played active roles in surfacing and mitigating harm from algorithmic bias. Despite being frequent users of AI, youth have been under-explored as potential contributors to and stakeholders in the future of AI. We consider three notions that may be at the root of the barriers youth face to playing an active role in responsible AI, namely that youth (1) cannot understand the technical aspects of AI, (2) cannot understand the ethical issues around AI, and (3) need protection from serious topics related to bias and injustice. In this study, we worked with youth (N = 30) in first through twelfth grade and parents (N = 6) to explore how youth can be part of identifying algorithmic bias and designing future systems to address problematic technology behavior. We found that youth are capable of identifying and articulating algorithmic bias, often in great detail. Participants suggested different ways users could give feedback to AI systems that reflects their values of diversity and inclusion. Youth who have less experience with computing or less exposure to societal structures can be supported by peers or adults with more of this knowledge, leading to critical conversations about fairer AI. This work illustrates youths' insights, suggesting that they should be integrated into building a future of responsible AI.