AI recommendations shape our daily decisions, and young people are no exception. The convenience of navigating personalized content comes with the notorious "filter bubble" effect, which can reduce exposure to diverse options and opinions. Children are particularly vulnerable because of their limited AI literacy and critical-thinking skills. In this study, we explore how to engage children as co-designers of child-centered experiences for learning AI concepts related to the filter bubble. Leveraging embodied and analogical learning theories, we co-designed an Augmented Reality (AR) application, BeeTrap, with children from backgrounds underrepresented in STEM. BeeTrap not only raises awareness of filter bubbles but also helps children understand the mechanisms of recommendation systems. Our contributions include (1) insights into child-centered AI learning using embodied metaphors and analogies as educational representations of AI concepts, and (2) implications for deepening children's understanding of AI concepts through co-design processes.
"Bee and I need diversity!" Break Filter Bubbles in Recommendation Systems through Embodied AI Learning
AI recommendations influence our daily decisions. The convenience of navigating personalized content goes hand in hand with the notorious filter bubble effect, which may decrease people's exposure to diverse options and opinions. Children are especially vulnerable because of their limited AI literacy and critical-thinking skills. In this study, we propose BeeTrap, a novel Augmented Reality (AR) application that aims not only to raise children's awareness of filter bubbles but also to empower them to mitigate this ethical issue by making sense of how AI recommendation systems work. By having children experience and break filter bubbles in a flower recommendation system, BeeTrap uses embodied metaphors (e.g., NEAR-FAR, ITERATION) and analogies (bee pollination) to bridge abstract AI concepts with sensory-motor experiences in familiar STEM contexts. To evaluate our design's effectiveness and accessibility for a broad range of children, we introduced BeeTrap in a four-day summer camp for middle-school students from backgrounds underrepresented in STEM. Results from pre- and post-tests and interviews show that BeeTrap developed students' technical understanding of AI recommendations, empowered them to break filter bubbles, and helped them form new personal and societal perspectives on AI technologies.
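The feedback loop behind a filter bubble can be illustrated in a few lines. The sketch below is not the paper's system; it is a deliberately naive recommender that always suggests the user's most-clicked category, so the click history and the recommendations reinforce each other until only one category is ever shown. The category names are illustrative.

```python
from collections import Counter

# Naive exploit-only recommender (illustrative, not BeeTrap's algorithm):
# always recommend the category the user has clicked most, and assume the
# user clicks whatever is shown. Exposure collapses to a single category.
def simulate(categories, steps):
    clicks = Counter({c: 1 for c in categories})  # start with a uniform history
    shown = []
    for _ in range(steps):
        rec = max(categories, key=lambda c: clicks[c])  # current favorite
        shown.append(rec)
        clicks[rec] += 1  # the shown item gets the click, reinforcing itself
    return shown

recs = simulate(["flowers", "sports", "music"], steps=10)
# After the first tie-break, every subsequent recommendation is identical.
```

Breaking the bubble, as BeeTrap has children do, amounts to injecting diversity into this loop, e.g., occasionally recommending a low-scoring category instead of always exploiting the favorite.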
- Award ID(s):
- 2238675
- PAR ID:
- 10519175
- Publisher / Repository:
- ACM
- Date Published:
- ISBN:
- 9798400704420
- Page Range / eLocation ID:
- 44 to 61
- Format(s):
- Medium: X
- Location:
- Delft, Netherlands
- Sponsoring Org:
- National Science Foundation
More Like this
-
Understanding the inner workings of Artificial Intelligence (AI) recommendation systems may help children become more sensible consumers of the ever-growing information in their daily lives. It may further enable deeper reflection on related ethical issues such as the filter bubble. With limited prior knowledge of math and computing, children often find AI concepts overly abstract. Inspired by optical computation, we propose a novel tangible interface, OptiDot. Through exploratory manipulation of light beams, OptiDot supports children in learning the dot product, a building block of numerous AI algorithms, and AI recommendations through embodied learning experiences. Findings of a preliminary user study with ten middle school students indicate the effectiveness of the key embodied metaphors. We also discuss the design implications and challenges of developing optics-inspired learning tools for children.
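The dot product's role in recommendation can be sketched concretely. In this hedged example (the vectors and item names are made up, and this is not OptiDot's representation), a user's preferences and each item are vectors; the item whose vector has the highest dot product with the user's is recommended.

```python
# Illustrative sketch: dot-product similarity as a recommendation score.
def dot(u, v):
    # Sum of elementwise products: the building block of many AI algorithms.
    return sum(a * b for a, b in zip(u, v))

user = [1.0, 0.0, 0.5]              # hypothetical preference vector
items = {
    "rose": [0.9, 0.1, 0.4],        # aligns with the user's preferences
    "drum": [0.0, 1.0, 0.2],        # mostly orthogonal to them
}
scores = {name: dot(user, vec) for name, vec in items.items()}
best = max(scores, key=scores.get)  # highest-scoring item is recommended
```

Here `dot(user, items["rose"])` is 1.1 while `dot(user, items["drum"])` is only 0.1, so "rose" is recommended; geometrically, the dot product rewards vectors that point in similar directions.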
-
Young learners today are constantly influenced by AI recommendations, from media choices to social connections. The resulting "filter bubble" can limit their exposure to diverse perspectives, which is especially problematic when they are not aware this manipulation is happening or why. To address the need to support youth AI literacy, we developed "BeeTrap", a mobile Augmented Reality (AR) learning game designed to enlighten young learners about the mechanisms and ethical issues of recommendation systems. The Transformative Experience model was integrated into the design of the learning activities, focusing on making AI concepts relevant to students' daily experiences, facilitating a new understanding of their digital world, and modeling real-life applications. Our pilot study with middle schoolers in a community-based program primarily investigated how transformatively structured AI learning activities affected students' understanding of recommendation systems and their overall conceptual, emotional, and behavioral changes toward AI.
-
Currently, there is a surge of interest in fair Artificial Intelligence (AI) and Machine Learning (ML) research that aims to mitigate discriminatory bias in AI algorithms, e.g., along lines of gender, age, and race. While most research in this domain focuses on developing fair AI algorithms, in this work we examine the challenges that arise when humans and fair AI interact. Our results show that, due to an apparent conflict between human preferences and fairness, a fair AI algorithm on its own may be insufficient to achieve its intended results in the real world. Using college major recommendation as a case study, we build a fair AI recommender by employing gender-debiasing machine learning techniques. Our offline evaluation showed that the debiased recommender makes fairer career recommendations without sacrificing prediction accuracy. Nevertheless, an online user study of more than 200 college students revealed that participants on average prefer the original biased system over the debiased system. Specifically, we found that perceived gender disparity is a determining factor in the acceptance of a recommendation. In other words, we cannot fully address the gender bias issue in AI recommendations without addressing the gender bias in humans. We conducted a follow-up survey to gain additional insights into the effectiveness of various design options that can help participants overcome their own biases. Our results suggest that making fair AI explainable is crucial for increasing its adoption in the real world.
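One common way to quantify the kind of gender disparity such a study examines is demographic parity: whether each group receives a given recommendation (e.g., a STEM major) at a similar rate. The sketch below is a generic fairness metric, not the authors' debiasing technique, and the data is invented for illustration.

```python
# Demographic parity gap (illustrative): difference between the highest and
# lowest per-group recommendation rates. A gap near 0 means groups receive
# the recommendation at similar rates.
def demographic_parity_gap(recommended, group):
    rates = {}
    for g in set(group):
        idx = [i for i, gg in enumerate(group) if gg == g]
        rates[g] = sum(recommended[i] for i in idx) / len(idx)  # group's rate
    vals = list(rates.values())
    return max(vals) - min(vals)

recommended = [1, 1, 0, 1, 0, 0]          # 1 = recommended a STEM major (made up)
group       = ["f", "f", "f", "m", "m", "m"]
gap = demographic_parity_gap(recommended, group)  # 2/3 vs 1/3 -> gap of 1/3
```

A debiasing method like the one the abstract describes would aim to shrink this gap while, as the authors report, preserving prediction accuracy.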
-
When people receive advice while making difficult decisions, they often make better decisions in the moment and also increase their knowledge in the process. However, such incidental learning can only occur when people cognitively engage with the information they receive and process it thoughtfully. How do people process the information and advice they receive from AI, and do they engage with it deeply enough to enable learning? To answer these questions, we conducted three experiments in which individuals were asked to make nutritional decisions and received simulated AI recommendations and explanations. In the first experiment, we found that when people were presented with both a recommendation and an explanation before making their choice, they made better decisions than they did when they received no such help, but they did not learn. In the second experiment, participants first made their own choice, and only then saw a recommendation and an explanation from AI; this condition also resulted in improved decisions, but no learning. However, in our third experiment, participants were presented with just an AI explanation but no recommendation and had to arrive at their own decision. This condition led to both more accurate decisions and learning gains. We hypothesize that the learning gains in this condition were due to the deeper engagement with explanations needed to arrive at the decisions. This work provides some of the most direct evidence to date that providing people with AI-generated recommendations and explanations may not be sufficient to ensure that they engage carefully with the AI-provided information. It also presents one technique that enables incidental learning and, by implication, can help people process AI recommendations and explanations more carefully.