Title: In the Black Mirror: Youth Investigations Into Artificial Intelligence
Over the past two decades, innovations powered by artificial intelligence (AI) have extended into nearly all facets of human experience. Our ethnographic research suggests that while young people sense they can't “trust” AI, many are not sure how it works or how much control they have over its growing role in their lives. In this study, we attempt to answer the following questions: 1) What can we learn about young people's understandings of AI when they produce media with and about it? 2) What are the design features of an ethics-centered pedagogy that promotes STEM engagement via AI? To answer these questions, we co-developed and documented three projects at YR Media, a national network of youth journalists and artists who create multimedia for public distribution. Participants are predominantly youth of color and those contending with economic and other barriers to full participation in STEM fields. Findings showed that by creating a learning ecology that centered the cultures and experiences of its learners while leveraging familiar tools for critical analysis, youth deepened their understanding of AI. Our study also showed that providing opportunities for youth to produce ethics-centered interactive stories interrogating invisibilized AI functionalities, and to release those stories to the public, empowered them to creatively express their understandings and apprehensions about AI.
Award ID(s):
1906895
PAR ID:
10375769
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
ACM Transactions on Computing Education
ISSN:
1946-6226
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Artificial intelligence (AI) tools and technologies are increasingly prevalent in society. Many teens interact with AI devices on a daily basis but often have a limited understanding of how AI works, as well as how it impacts society more broadly. It is critical to develop youths’ understanding of AI, cultivate ethical awareness, and support diverse youth in pursuing computer science to help ensure future development of more equitable AI technologies. Here, we share our experiences developing and remotely facilitating an interdisciplinary AI ethics program for secondary students designed to increase teens’ awareness and understanding of AI and its societal impacts. Students discussed stories with embedded ethical dilemmas, engaged with AI media and simulations, and created digital products to express their stance on an AI ethics issue. Across four iterations in formal and informal settings, we found students to be engaged in AI stories and invested in learning about AI and its societal impacts. Short stories were effective in raising awareness, focusing discussion and supporting students in developing a more nuanced understanding of AI ethics issues, such as fairness, bias and privacy. 
  2. As artificial intelligence (AI) profoundly reshapes our personal and professional lives, there are growing calls to support pre-college aged youth as they develop capacity to engage critically and productively with AI. While efforts to introduce AI concepts to pre-college aged youth have largely focused on older teens, there is growing recognition of the importance of developing AI literacy among younger children. Today’s youth already encounter and use AI regularly, but they might not yet be aware of its role, limitations, risks, or purpose in a particular encounter, and may not be positioned to question whether it should be doing what it’s doing. In response to this critical moment to develop AI learning experiences that can support children at this age, researchers and learning designers at the University of California’s Lawrence Hall of Science, in collaboration with AI developers at the University of Southern California’s Institute for Creative Technologies, have been iteratively developing and studying a series of interactive learning experiences for public science centers and similar out-of-school settings. The project is funded through a grant by the National Science Foundation and the resulting exhibit, The Virtually Human Experience (VHX), represents one of the first interactive museum exhibits in the United States designed explicitly to support young children and their families in developing understanding of AI. The coordinated experiences in VHX include both digital (computer-based) and non-digital (“unplugged”) activities designed to engage children (ages 7-12) and their families in learning about AI. In this paper, we describe emerging insights from a series of case studies that track small groups of museum visitors (e.g. a parent and two children) as they interact with the exhibit. The case studies reveal opportunities and challenges associated with designing AI learning experiences for young children in a free-choice environment like a public science center. In particular, we focus on three themes emerging from our analyses of case data: 1) relationships between design elements and collaborative discourse within intergenerational groups (i.e., families and other adult-child pairings); 2) relationships between design elements and impromptu visitor experimentation within the exhibit space; and 3) challenges in designing activities with a low threshold for initial engagement such that even the youngest visitors can engage meaningfully with the activity. Findings from this study are directly relevant to support researchers and learning designers engaged in rapidly expanding efforts to develop AI learning opportunities for youth, and are likely to be of interest to a broad range of researchers, designers, and practitioners as society encounters this transformative technology and its applications become increasingly integral to how we live and work. 
  3. The rapid expansion of Artificial Intelligence (AI) necessitates educating students to become knowledgeable about AI and aware of its interrelated technical, social, and human implications. The latter (ethics) is particularly important for K-12 students because they may have been interacting with AI through everyday technology without realizing it. They may be targeted by AI-generated fake content on social media and may have been victims of algorithmic bias in AI applications such as facial recognition and predictive policing. To empower students to recognize ethics-related issues of AI, this paper reports the design and implementation of a suite of ethics activities embedded in the Developing AI Literacy (DAILy) curriculum. These activities engage students in investigating bias in existing technologies, experimenting with ways to mitigate potential bias, and redesigning the YouTube recommendation system in order to understand different aspects of AI-related ethics issues. Our observations from implementing these lessons with adolescents, together with exit interviews, show that students were highly engaged and, after these ethics lessons, became aware of potential harms and consequences of AI tools in everyday life.
  4. Mahmoud, Ali B. (Ed.)

    Billions of dollars are being invested into developing medical artificial intelligence (AI) systems, and yet public opinion of AI in the medical field seems to be mixed. Although high expectations for the future of medical AI do exist in the American public, anxiety and uncertainty about what it can do and how it works are widespread. Continuing evaluation of public opinion on AI in healthcare is necessary to ensure alignment between patient attitudes and the technologies adopted. We conducted a representative-sample survey (total N = 203) to measure the trust of the American public towards medical AI. Primarily, we contrasted preferences for AI and human professionals as medical decision-makers. Additionally, we measured expectations for the impact and use of medical AI in the future. We present four noteworthy results: (1) The general public strongly prefers that human medical professionals, rather than AI, make medical decisions, while at the same time believing those professionals are more likely than AI to make culturally biased decisions. (2) The general public is more comfortable with a human reading their medical records than an AI, both now and “100 years from now.” (3) The general public is nearly evenly split between those who would trust their own doctor to use AI and those who would not. (4) Respondents expect AI will improve medical treatment, but more so in the distant future than immediately.

     
  5. Regardless of how much data artificial intelligence agents have available, agents will inevitably encounter previously unseen situations in real-world deployments. Reacting to novel situations by acquiring new information from other people—socially situated learning—is a core faculty of human development. Unfortunately, socially situated learning remains an open challenge for artificial intelligence agents because they must learn how to interact with people to seek out the information that they lack. In this article, we formalize the task of socially situated artificial intelligence—agents that seek out new information through social interactions with people—as a reinforcement learning problem where the agent learns to identify meaningful and informative questions via rewards observed through social interaction. We manifest our framework as an interactive agent that learns how to ask natural language questions about photos as it broadens its visual intelligence on a large photo-sharing social network. Unlike active-learning methods, which implicitly assume that humans are oracles willing to answer any question, our agent adapts its behavior based on observed norms of which questions people are or are not interested to answer. Through an 8-mo deployment where our agent interacted with 236,000 social media users, our agent improved its performance at recognizing new visual information by 112%. A controlled field experiment confirmed that our agent outperformed an active-learning baseline by 25.6%. This work advances opportunities for continuously improving artificial intelligence (AI) agents that better respect norms in open social environments. 
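    The reinforcement learning framing described in this last abstract can be illustrated with a minimal sketch. The code below is not the authors' deployed system; it is a toy, self-contained example, with hypothetical question templates, toy photo features, and a simulated social environment standing in for real users, of an agent that learns via REINFORCE-style updates to prefer questions that people actually answer.

```python
# Minimal sketch (not the authors' implementation) of socially situated
# question-asking framed as reinforcement learning: the agent chooses which
# question to post about a photo and updates its policy from the social
# reward it observes (e.g., whether anyone answered informatively).
# Question templates, features, and the reward simulator are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

QUESTION_TEMPLATES = [
    "What breed is this dog?",
    "Where was this photo taken?",
    "What is the occasion here?",
    "Rate this photo 1-10?",          # people rarely answer this one
]

N_FEATURES = 4                         # toy photo features (stand-in for visual cues)
weights = np.zeros((len(QUESTION_TEMPLATES), N_FEATURES))
LEARNING_RATE = 0.1

def policy(photo_features):
    """Softmax over question templates, conditioned on photo features."""
    logits = weights @ photo_features
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def simulated_social_reward(question_idx):
    """Stand-in for a real deployment: reward is 1 if a user gives an
    informative answer, 0 if the question is ignored. The last template
    is rarely answered, mimicking norms the agent must learn to respect."""
    answer_prob = [0.7, 0.6, 0.5, 0.05][question_idx]
    return 1.0 if rng.random() < answer_prob else 0.0

for step in range(5000):
    photo = rng.random(N_FEATURES)            # features of the next photo
    probs = policy(photo)
    q = rng.choice(len(QUESTION_TEMPLATES), p=probs)
    reward = simulated_social_reward(q)

    # REINFORCE-style update: push probability toward questions that
    # actually elicit informative answers from people.
    grad = -probs[:, None] * photo[None, :]   # -pi(b|s) * phi(s) for all b
    grad[q] += photo                          # +phi(s) for the chosen question
    weights += LEARNING_RATE * reward * grad

print("Learned question preferences:", np.round(policy(np.ones(N_FEATURES)), 3))
```

    In the deployment described in the abstract, the simulated reward function would be replaced by observed responses from social media users, and the toy photo features by learned visual representations; the sketch only shows the basic loop of asking, observing social feedback, and updating the questioning policy.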