- Award ID(s):
- 1906854
- NSF-PAR ID:
- 10293590
- Date Published:
- Journal Name:
- IDC '21: Interaction Design and Children
- Page Range / eLocation ID:
- 283 to 293
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Social robot co-design requires aiding users as they imagine these novel devices within their everyday lives and enabling designers to understand and address users’ experiences. This paper presents the exploratory development and evaluation of a role-playing game aimed at identifying the desired features and uses of a social robot that can assist people diagnosed with depression. Participants (n = 16) played the game as a character with depression, designed a companion robot for that character, and chose reactions to daily challenges. Though participants initially selected robot capabilities based on their own needs, after the game they identified alternative designs that would better address daily challenges faced by individuals with depression. We discuss aspects of the game that allowed participants to understand how various robot characteristics can address the experience of depression and suggest how role-playing games can support users and designers in identifying beneficial features and uses of emerging robotic technologies.
-
Educational video games can engage students in authentic STEM practices, which often involve visual representations. In particular, because most interactions within video games are mediated through visual representations, video games provide opportunities for students to experience disciplinary practices with visual representations. Prior research on learning with visual representations in non-game contexts suggests that visual representations may confuse students if they lack prerequisite representational-competencies. However, it is unclear how this research applies to game environments. To address this gap, we investigated the role of representational-competencies for students’ learning from video games. We first conducted a single-case study of a high-performing undergraduate student playing an astronomy game as an assignment in an astronomy course. We found that this student had difficulties making sense of the visual representations in the game. We interpret these difficulties as indicating a lack of representational-competencies. Further, these difficulties seemed to lead to the student’s inability to relate the game experiences to the content covered in his astronomy course. A second study investigated whether interventions that have proven successful in structured learning environments to support representational-competencies would enhance students’ learning from visual representations in the video game. We randomly assigned 45 students enrolled in an undergraduate course to two conditions. Students either received representational-competency support while playing the astronomy game or they did not receive this support. Results showed no effects of representational-competency supports. This suggests that instructional designs that are effective for representational-competency supports in structured learning environments may not be effective for educational video games. We discuss implications for future research, for designers of educational games, and for educators.
-
We investigated how families experienced immersion as they collaboratively made sense of geologic time and geoscience processes during a place-based, learning-on-the-move (LOTM) experience mediated by a mobile augmented reality (MAR) app. Our team developed an MAR app, Time Explorers, that focused on how rock-water interactions shaped Appalachia over millions of years. Data were collected at the Children’s Garden at the Arboretum at Penn State. Data sources were videos of app usage, point-of-view camera recordings with audio capturing family conversations, and interviews from 17 families (51 people). The analytical technique was interaction analysis, in which episodes of family sense-making were identified and developed into qualitative vignettes focused on how immersion did or did not support learning about geoscience and geologic time. We analyzed how design elements supported sensory, actional, narrative, and social immersion through photo-taking, discussion prompts, and augmented reality visualizations. Findings showed that sensory and social immersion supported sense-making conversations and observational inquiry, while narrative and actional immersion supported deep family engagement with the geoscience content. At many micro-sites of learning, families engaged in multiple immersive processes where conversations, observational inquiry, and deep engagement with the geoscience came together during LOTM. This analysis contributes to the CSCL literature on theory related to LOTM in outdoor informal settings, while also providing design conjectures in an immersive, family-centered, place-based LOTM framework.
-
Autonomous educational social robots can be used to help promote literacy skills in young children. Such robots, which emulate the emotive, perceptual, and empathic abilities of human teachers, are capable of replicating some of the benefits of one-on-one tutoring from human teachers, in part by leveraging individual student’s behavior and task performance data to infer sophisticated models of their knowledge. These student models are then used to provide personalized educational experiences by, for example, determining the optimal sequencing of curricular material. In this paper, we introduce an integrated system for autonomously analyzing and assessing children’s speech and pronunciation in the context of an interactive word game between a social robot and a child. We present a novel game environment and its computational formulation, an integrated pipeline for capturing and analyzing children’s speech in real-time, and an autonomous robot that models children’s word pronunciation via Gaussian Process Regression (GPR), augmented with an Active Learning protocol that informs the robot’s behavior. We show that the system is capable of autonomously assessing children’s pronunciation ability, with ground truth determined by a post-experiment evaluation by human raters. We also compare phoneme- and word-level GPR models and discuss trade-offs of each approach in modeling children’s pronunciation. Finally, we describe and analyze a pipeline for automatic analysis of children’s speech and pronunciation, including an evaluation of Speech Ace as a tool for future development of autonomous, speech-based language tutors.
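To illustrate the kind of approach the abstract above describes, the following is a minimal sketch of word-level Gaussian Process Regression combined with uncertainty-based Active Learning. It is not the paper's implementation: the word feature vectors, pronunciation scores, and the uncertainty-sampling selection rule are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' pipeline): fit a GPR model
# over per-word pronunciation scores, then pick the next word to probe by
# predictive uncertainty (a simple Active Learning criterion).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical word-level features (e.g., word length, phoneme difficulty)
# and rated pronunciation scores for words the child has already attempted.
X_observed = np.array([[3, 0.2], [5, 0.6], [4, 0.4]])
y_observed = np.array([0.90, 0.55, 0.70])

# Candidate words not yet attempted, described by the same feature scheme.
X_candidates = np.array([[6, 0.8], [3, 0.3], [7, 0.9]])

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_observed, y_observed)

# Predict each candidate's score with uncertainty and select the most
# uncertain word as the next prompt (uncertainty sampling).
mean, std = gpr.predict(X_candidates, return_std=True)
next_word_idx = int(np.argmax(std))
print(f"Next word to probe: candidate {next_word_idx}, "
      f"predicted score {mean[next_word_idx]:.2f} +/- {std[next_word_idx]:.2f}")
```

A phoneme-level variant would fit the same kind of model over phoneme features instead of whole-word features, which is the trade-off the abstract mentions comparing.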
-
Social robots are emerging as learning companions for children, and research shows that they facilitate the development of interest and learning even through brief interactions. However, little is known about how such technologies might support these goals in authentic environments over long-term periods of use and interaction. We designed a learning companion robot capable of supporting children reading popular-science books by expressing social and informational commentaries. We deployed the robot in homes of 14 families with children aged 10–12 for four weeks during the summer. Our analysis revealed critical factors that affected children’s long-term engagement and adoption of the robot, including external factors such as vacations, family visits, and extracurricular activities; family/parental involvement; and children’s individual interests. We present four in-depth cases that illustrate these factors and demonstrate their impact on children’s reading experiences and discuss the implications of our findings for robot design.