This content will become publicly available on December 18, 2026

Title: SciStory: Designing AI-Supported Inquiry in Science Learning Games
This design case explores how an AI-supported, narrative-centered science learning game (SciStory: Pollinators) was designed over multiple iterations to support middle schoolers’ socioscientific learning, engagement, and persuasive writing. The case highlights how AI-driven conversational agents were designed to support student-led socioscientific inquiry, and the tensions our team explored as we integrated agents into a story game about community food systems, pollinators, and neighborhood land use.
Award ID(s):
2112635
PAR ID:
10660220
Author(s) / Creator(s):
Publisher / Repository:
Association for Educational Communications and Technology
Date Published:
Journal Name:
International Journal of Designs for Learning
Volume:
16
Issue:
2
ISSN:
2159-449X
Page Range / eLocation ID:
170 to 181
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Deep reinforcement learning has learned to play many games well but has failed on others. To better characterize the modes and reasons of failure of deep reinforcement learners, we test the widely used Advantage Actor-Critic (A2C) algorithm on four deceptive games, which are specially designed to challenge game-playing agents. These games are implemented in the General Video Game AI framework, which allows us to compare the behavior of reinforcement learning-based agents with planning agents based on tree search. We find that several of these games reliably deceive deep reinforcement learners, and that the resulting behavior highlights the shortcomings of the learning algorithm. The particular ways in which these agents fail differ from how planning-based agents fail, further illuminating the character of these algorithms. We propose an initial typology of deceptions that could help us better understand the pitfalls and failure modes of (deep) reinforcement learning.
  2. Young learners today are constantly influenced by AI recommendations, from media choices to social connections. The resulting "filter bubble" can limit their exposure to diverse perspectives, which is especially problematic when they are unaware that this manipulation is happening or why. To address the need to support youth AI literacy, we developed "BeeTrap", a mobile Augmented Reality (AR) learning game designed to enlighten young learners about the mechanisms and ethical issues of recommendation systems. The Transformative Experience model was integrated into the design of the learning activities, focusing on making AI concepts relevant to students’ daily experiences, facilitating a new understanding of their digital world, and modeling real-life applications. Our pilot study with middle schoolers in a community-based program primarily investigated how transformative structured AI learning activities affected students’ understanding of recommendation systems and their overall conceptual, emotional, and behavioral changes toward AI.
  3. This paper describes the design of a collaborative game, called Rainbow Agents, created to promote computational literacy through play. In Rainbow Agents, players engage directly with computational concepts by programming agents to plant and maintain a shared garden space. Rainbow Agents was designed to encourage collaborative play and shared sense-making among groups who are typically underrepresented in computer science. In this paper, we discuss how that design goal informed the mechanics of the game and how each of those mechanics affords different goal alignments toward gameplay (e.g., competitive versus collaborative). We apply this framework to a case from an early implementation, describing how players’ goal alignments toward the game changed within the course of a single play session. We conclude by discussing avenues for future work as we begin data collection at two highly diverse science museum locations.
  4. Observations abound about the power of visual imagery in human intelligence, from how Nobel prize-winning physicists make their discoveries to how children understand bedtime stories. These observations raise an important question for cognitive science: what computations are taking place in someone’s mind when they use visual imagery? Answering this question is not easy and will require much continued research across the multiple disciplines of cognitive science. Here, we focus on a related and more circumscribed question from the perspective of artificial intelligence (AI): if an intelligent agent uses visual imagery-based knowledge representations and reasoning operations, then what kinds of problem solving might be possible, and how would such problem solving work? We highlight recent progress in AI toward answering these questions in the domain of visuospatial reasoning, looking at a case study of how imagery-based artificial agents can solve visuospatial intelligence tests. In particular, we first examine several variations of imagery-based knowledge representations and problem-solving strategies that are sufficient for solving problems from the Raven’s Progressive Matrices intelligence test. We then look at how artificial agents, instead of being designed manually by AI researchers, might learn portions of their own knowledge and reasoning procedures from experience, including learning visuospatial domain knowledge, learning and generalizing problem-solving strategies, and learning the actual definition of the task in the first place.
  5. There are many initiatives that teach Artificial Intelligence (AI) literacy to K-12 students. Most downsize college-level instructional materials into grade-level-appropriate formats, overlooking students' unique perspectives in the design of curricula. To investigate the use of educational games as a vehicle for uncovering youth's understanding of AI instruction, we co-designed games with 39 Black, Hispanic, and Asian high school girls and non-binary youth to create engaging learning materials for their peers. We conducted qualitative analyses of the designed game artifacts, student discourse, and their feedback on the efficacy of the learning activities. This study highlights the benefits of co-design and learning games for uncovering students' understanding and ability to apply AI concepts in game-based learning, their emergent perspectives on AI, and the prior knowledge that informs their game design choices. Our research uncovers students' AI misconceptions and informs the design of educational games and grade-level-appropriate AI instruction.