Abstract This study addressed whether combining tinkering with digital storytelling (i.e., narrating and reflecting on experiences to an imagined audience) can engender engineering learning opportunities. Eighty‐four families with 5‐ to 10‐year‐old (M = 7.69) children (48% female children; 57% White, 11% Asian, 6% Black) watched a video introducing a tinkering activity and were randomly assigned to either a digital storytelling condition or a no digital storytelling condition during tinkering. After tinkering, families reflected on their tinkering experience and were again randomly assigned either to engage in digital storytelling or not. Children in the digital storytelling condition during tinkering spoke most to an imagined audience, talked most about engineering at reflection, and remembered the most information about the experience weeks later.
Acting as Inverse Inverse Planning
Great storytellers know how to take us on a journey. They direct characters to act—not necessarily in the most rational way—but rather in a way that leads to interesting situations, and ultimately creates an impactful experience for audience members looking on. If audience experience is what matters most, then can we help artists and animators directly craft such experiences, independent of the concrete character actions needed to evoke those experiences? In this paper, we offer a novel computational framework for such tools. Our key idea is to optimize animations with respect to simulated audience members’ experiences. To simulate the audience, we borrow an established principle from cognitive science: that human social intuition can be modeled as “inverse planning,” the task of inferring an agent’s (hidden) goals from its (observed) actions. Building on this model, we treat storytelling as “inverse inverse planning,” the task of choosing actions to manipulate an inverse planner’s inferences. Our framework is grounded in literary theory, naturally capturing many storytelling elements from first principles. We give a series of examples to demonstrate this, with supporting evidence from human subject studies.
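The inverse planning / inverse inverse planning distinction above can be sketched in a few lines. The toy below assumes a 1-D world with two candidate goals, a softmax-rational Bayesian observer, and a greedy storyteller; all of these modeling choices (and the names `observer_posterior`, `choose_action`, `BETA`) are our illustration, not the paper's implementation.

```python
import numpy as np

N_STATES = 5
GOALS = [0, 4]   # candidate goal locations
BETA = 2.0       # observer's assumed rationality (softmax temperature)

def step(state, action):
    """Move left (-1) or right (+1), clipped to the world's edges."""
    return min(max(state + action, 0), N_STATES - 1)

def action_likelihood(state, action, goal):
    """Softmax-rational agent model: actions that reduce distance
    to the goal are exponentially more likely."""
    utils = np.array([-abs(step(state, a) - goal) for a in (-1, +1)])
    probs = np.exp(BETA * utils) / np.exp(BETA * utils).sum()
    return probs[0] if action == -1 else probs[1]

def observer_posterior(trajectory, prior=(0.5, 0.5)):
    """Inverse planning: Bayesian update over goals given observed actions."""
    post = np.array(prior, dtype=float)
    for state, action in trajectory:
        post *= [action_likelihood(state, action, g) for g in GOALS]
    return post / post.sum()

def choose_action(state, trajectory, target_goal_idx):
    """Inverse inverse planning: pick the action that most raises the
    observer's posterior on the goal the storyteller wants inferred."""
    return max((-1, +1), key=lambda a: observer_posterior(
        trajectory + [(state, a)])[target_goal_idx])

# The character starts in the middle; the storyteller wants the audience
# to infer that the goal is location 4.
state, traj = 2, []
for _ in range(3):
    a = choose_action(state, traj, target_goal_idx=1)
    traj.append((state, a))
    state = step(state, a)
print(observer_posterior(traj))  # posterior mass concentrates on goal 4
```

The same machinery supports other audience-level objectives: for suspense, the storyteller would instead pick actions that keep the posterior near uniform.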
- PAR ID:
- 10467291
- Publisher / Repository:
- ACM
- Date Published:
- ISBN:
- 9798400701597
- Page Range / eLocation ID:
- 1 to 12
- Format(s):
- Medium: X
- Location:
- Los Angeles CA USA
- Sponsoring Org:
- National Science Foundation
More Like this
-
How do people use human-made objects (artifacts) to learn about the people and actions that created them? We test the richness of people’s reasoning in this domain, focusing on the task of judging whether social transmission has occurred (i.e., whether one person copied another). We develop a formal model of this reasoning process as a form of rational inverse planning, which predicts that rather than focusing solely on artifacts’ similarity to judge whether copying occurred, people should also take into account availability constraints (which materials are available) and functional constraints (which materials work). Using an artifact-building task in which two characters build tools to solve a puzzle box, we find that this inverse planning model predicts trial-by-trial judgments, whereas simpler models that do not consider availability or functional constraints do not. This suggests people use a process like inverse planning to make flexible inferences from artifacts’ features about the source of design ideas.
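The role of availability and functional constraints can be illustrated with a toy Bayesian calculation. The generative story below (uniform choice among usable materials, a 50% prior on copying) is our illustrative assumption, not the paper's exact model; it shows why a matching artifact is weak evidence of copying when only one material is usable, and strong evidence when many are.

```python
from fractions import Fraction

def p_copy_given_match(n_available, n_functional, prior_copy=Fraction(1, 2)):
    """Posterior that builder B copied builder A, given B's tool matches A's.

    If B copied, the match is certain (probability 1). If B built
    independently, B picks uniformly among materials that are both
    available and functional, so a match has probability 1 / n_usable.
    """
    n_usable = min(n_available, n_functional)
    p_match_independent = Fraction(1, n_usable)
    numerator = prior_copy * 1
    denominator = numerator + (1 - prior_copy) * p_match_independent
    return numerator / denominator

# Only one usable material: a match is uninformative (posterior stays 1/2).
print(p_copy_given_match(n_available=1, n_functional=5))  # 1/2
# Five usable materials: a match is strong evidence of copying.
print(p_copy_given_match(n_available=5, n_functional=5))  # 5/6
```

A similarity-only model would assign the same copying judgment to both cases, which is the contrast the trial-by-trial comparison exploits.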
-
This Research-to-Practice Full Paper describes the implementation of integrated reflective activities in two computer engineering courses. Reflective activities contribute to student learning and professional development. Instructional team members have been examining the need and opportunities to deepen learning by integrating reflective activities into problem-solving experiences. We implemented reflective activities using a coordinated framework for a modified Kolbian cycle. The framework consists of reflection-for-action, reflection-in-action, reflection-on-action, and composted reflections. Reflection-for-action takes place before the experience and involves thinking about and planning future actions. Reflection-in-action takes place during the experience while actively problem-solving. Reflection-on-action takes place after the problem-solving experience. Composting involves revisiting past experiences and reflections to inform future planning. We describe the reflective activities in the context of the coordinated framework, including strategies to support reflection and increase the likelihood of engagement and success. We conclude with an analysis of the activities using the CPREE framework for reflection pathways.
-
Task and motion planning represents a powerful set of hybrid planning methods that combine reasoning over discrete task domains and continuous motion generation. Traditional reasoning necessitates task domain models and enough information to ground actions to motion planning queries. Gaps in this knowledge often arise from sources like occlusion or imprecise modeling. This work generates task and motion plans that include actions that cannot be fully grounded at planning time. During execution, such an action is handled by a provided human-designed or learned closed-loop behavior. Execution combines offline-planned motions and online behaviors until the task goal is reached. Behavior failures are fed back as constraints to find new plans. Forty real-robot trials and motivating demonstrations are performed to evaluate the proposed framework and compare it against the state of the art. Results show faster execution time, fewer actions, and more success in problems where diverse gaps arise. The experimental data are shared so researchers can simulate these settings. The work shows promise in expanding the class of realistic partially grounded problems that robots can address.
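The plan/execute/replan loop described in this abstract can be sketched schematically. The `Action` class, the toy planner, and the failing `open_drawer` behavior below are all illustrative assumptions, not the paper's actual system; the point is the control flow in which a behavior failure becomes a planning constraint.

```python
class Action:
    def __init__(self, name, motion=None, behavior=None):
        self.name = name
        self.motion = motion      # offline-planned motion (fully grounded)
        self.behavior = behavior  # online closed-loop behavior (knowledge gap)

def plan(goal, constraints):
    """Toy task planner: routes around any blacklisted behavior."""
    if "open_drawer" in constraints:
        return [Action("open_cabinet", motion="traj_B"),
                Action("grasp", motion="traj_C")]
    return [Action("open_drawer", behavior=lambda: False),  # will fail
            Action("grasp", motion="traj_C")]

def execute(goal):
    """Interleave offline motions and online behaviors; feed behavior
    failures back to the planner as constraints until the goal is reached."""
    constraints = set()
    while True:
        for act in plan(goal, constraints):
            if act.behavior is not None and not act.behavior():
                constraints.add(act.name)  # failure becomes a constraint
                break                      # replan with the new constraint
            # otherwise: the grounded motion (act.motion) executes, or the
            # behavior succeeded; continue with the next action
        else:
            return constraints             # all actions executed: goal reached

print(execute("fetch_mug"))  # {'open_drawer'}
```

Here the first plan relies on an `open_drawer` behavior that fails at run time; the failure is recorded as a constraint, and the replanned sequence reaches the goal using only grounded motions.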
-
Estimating the unknown reward functions driving agents' behavior is a central challenge in inverse games and reinforcement learning. This paper introduces a unified framework for reward function recovery in two-player zero-sum matrix games and Markov games with entropy regularization. Given observed player strategies and actions, we aim to reconstruct the underlying reward functions. This task is challenging due to the inherent ambiguity of inverse problems, the non-uniqueness of feasible rewards, and limited observational data coverage. To address these challenges, we establish reward function identifiability using the quantal response equilibrium (QRE) under linear assumptions. Building on this theoretical foundation, we propose an algorithm to learn reward functions from observed actions, designed to capture all plausible reward parameters by constructing confidence sets. Our algorithm works in both static and dynamic settings and is adaptable to incorporate other methods, such as Maximum Likelihood Estimation (MLE). We provide strong theoretical guarantees for the reliability and sample efficiency of our algorithm. Empirical results demonstrate the framework’s effectiveness in accurately recovering reward functions across various scenarios, offering new insights into decision-making in competitive environments.
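The QRE-based recovery idea can be sketched concretely for a static zero-sum matrix game. In the entropy-regularized game, a QRE satisfies x = softmax(Ay/τ) and y = softmax(−Aᵀx/τ), so taking logs yields linear equations in A up to normalizing constants; under a linear parameterization A = Σₖ θₖBₖ these can be solved for θ. The basis matrices, temperature, and point-estimate least-squares fit below are our illustrative assumptions; the paper constructs confidence sets over all plausible reward parameters rather than a single estimate.

```python
import numpy as np

def softmax(v, tau):
    z = np.exp(v / tau - np.max(v / tau))
    return z / z.sum()

def qre(A, tau, iters=3000):
    """Damped fixed-point iteration for the entropy-regularized QRE:
    x = softmax(A y / tau), y = softmax(-A^T x / tau)."""
    n, m = A.shape
    x, y = np.ones(n) / n, np.ones(m) / m
    for _ in range(iters):
        x = 0.5 * x + 0.5 * softmax(A @ y, tau)
        y = 0.5 * y + 0.5 * softmax(-A.T @ x, tau)
    return x, y

def recover_theta(x, y, basis, tau):
    """Recover linear reward parameters theta, with A = sum_k theta_k B_k.
    At a QRE, (A y)_i - c = tau * log x_i and (-A^T x)_j - c' = tau * log y_j
    for normalizing constants c, c'; stacking these gives a linear system
    in (theta, c, c') solved here by least squares."""
    rows, rhs = [], []
    for i in range(len(x)):
        rows.append([(B @ y)[i] for B in basis] + [-1.0, 0.0])
        rhs.append(tau * np.log(x[i]))
    for j in range(len(y)):
        rows.append([-(B.T @ x)[j] for B in basis] + [0.0, -1.0])
        rhs.append(tau * np.log(y[j]))
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol[:len(basis)]

# Ground truth: A = 2*B0 - 1*B1 for two fixed basis matrices.
rng = np.random.default_rng(0)
basis = [rng.standard_normal((3, 3)) for _ in range(2)]
A = 2.0 * basis[0] - 1.0 * basis[1]
x, y = qre(A, tau=5.0)
theta_hat = recover_theta(x, y, basis, tau=5.0)
print(np.round(theta_hat, 3))  # close to [2, -1]
```

Without the linear parameterization, the same log-linear equations are underdetermined, which is one face of the non-uniqueness the abstract mentions.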
