- Journal Name: HCI in Games: Serious and Immersive Games. HCII 2021. Lecture Notes in Computer Science
- Sponsoring Org: National Science Foundation
More Like this
This paper presents an expansion to the Abstract Meaning Representation (AMR) annotation schema that captures fine-grained semantically and pragmatically derived spatial information in grounded corpora. We describe a new lexical category conceptualization and set of spatial annotation tools built in the context of a multimodal corpus consisting of 185 3D structure-building dialogues between a human architect and human builder in Minecraft. Minecraft provides a particularly beneficial spatial relation-elicitation environment because it automatically tracks locations and orientations of objects and avatars in the space according to an absolute Cartesian coordinate system. Through a two-step process of sentence-level and document-level annotation designed to capture implicit information, we leverage these coordinates and bearings in the AMRs in combination with spatial framework annotation to ground the spatial language in the dialogues to absolute space.
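The paper's annotation tooling is not reproduced here, but the grounding step it describes can be illustrated in miniature. The helper below is a hypothetical sketch (the function name and relation labels are assumptions, not the paper's schema): it derives two coarse spatial relations between blocks from Minecraft-style absolute coordinates, where the y axis is vertical.

```python
def spatial_relations(a, b):
    """Derive coarse absolute-space relations of block b relative to
    block a, given Minecraft-style (x, y, z) integer coordinates
    (y is the vertical axis)."""
    ax, ay, az = a
    bx, by, bz = b
    rels = []
    # b sits directly on top of a: same horizontal cell, one unit up.
    if (bx, bz) == (ax, az) and by == ay + 1:
        rels.append("on_top_of")
    # b is laterally adjacent to a: same height, one unit away on x or z.
    if by == ay and abs(bx - ax) + abs(bz - az) == 1:
        rels.append("adjacent_to")
    return rels
```

A document-level annotation pass could combine such coordinate-derived relations with the avatar's bearing to resolve viewer-relative terms like "left of" into absolute space, which is the kind of grounding the abstract describes.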
Spatial reasoning is an important skillset that is malleable to training interventions. One possible context for intervention is the popular video game Minecraft. Minecraft encourages users to engage in spatial manipulation of 3D objects. However, few papers have chronicled any in-game practices that might evidence spatial reasoning, or how we might study its development through the game. In this paper, we report on 11 middle school students’ spatial reasoning practices while playing Minecraft. We use audio and video data of student gameplay to delineate five in-game practices that align with spatial reasoning. We expand on a student case study to explicate these practices. The identified practices may be beneficial for studying spatial reasoning development in game-based environments and contribute to a growing body of research on ways games support the development of important and transferable skills.
The overall goal of our research is to develop a system of intelligent multimodal affective pedagogical agents that are effective for different types of learners (Adamo et al., 2021). While most research on pedagogical agents focuses on the cognitive aspects of online learning and instruction, this project explores the less-studied role of affective (or emotional) factors. We aim to design believable animated agents that can convey realistic, natural emotions through speech, facial expressions, and body gestures, and that can react to students’ detected emotional states with emotional intelligence. Within the context of this goal, the specific objective of the work reported in the paper was to examine the extent to which the agents’ facial micro-expressions affect students’ perception of the agents’ emotions and their naturalness. Micro-expressions are very brief facial expressions that occur when a person either deliberately or unconsciously conceals an emotion being felt (Ekman & Friesen, 1969). Our assumption is that if the animated agents display facial micro-expressions in addition to macro-expressions, they will convey higher expressive richness and naturalness to the viewer, as “the agents can possess two emotional streams, one based on interaction with the viewer and the other based […]
Šķilters, J.; Newcombe, N.; Uttal, D. (Eds.)
As excitement for Minecraft continues to grow, we consider its potential to function as an engaging environment for practicing and studying spatial reasoning. To support this exposition, we describe a glimpse of our current analysis of spatial reasoning skills in Minecraft. Twenty university students participated in a laboratory study that asked them to recreate three existing buildings in Minecraft. Screen captures of user actions, together with eye-tracking data, helped us identify ways that students utilize perspective taking, constructing mental representations, building and place-marking, and error checking. These findings provide an initial impetus for further studies of the types of spatial skills that students may exhibit while playing Minecraft. The study also introduces questions about how the design of Minecraft activities may promote, or inhibit, the use of certain spatial skills.
How Does Augmented Observation Facilitate Multimodal Representational Thinking? Applying Deep Learning to Decode Complex Student Construct
In this paper, we demonstrate how machine learning can be used to quickly assess a student’s multimodal representational thinking. Multimodal representational thinking is the complex construct that encodes how students form conceptual, perceptual, graphical, or mathematical symbols in their minds. Augmented reality (AR) technology is adopted to diversify students’ representations: a low-cost, high-resolution thermal camera attached to a smartphone allows students to explore the unseen world of thermodynamics. Ninth-grade students (N = 314) engaged in a prediction–observation–explanation (POE) inquiry cycle scaffolded to leverage the augmented observation provided by this device. The objective is to investigate how machine learning can expedite the automated assessment of multimodal representational thinking about heat energy. Two automated text classification methods were adopted to decode the different mental representations students used to explain their haptic perception, thermal imaging, and graph data collected in the lab. Since current automated assessment in science education rarely considers multilabel classification, we resorted to the state-of-the-art deep learning technique of bidirectional encoder representations from transformers (BERT). The BERT model classified open-ended responses into appropriate categories with higher precision than the traditional machine learning method. The satisfactory accuracy of deep learning in assigning multiple labels is […]
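The abstract's BERT pipeline itself is not shown, but the multilabel setup it mentions hinges on one decision rule worth making concrete. The sketch below (with hypothetical label names and logits, not the study's categories or model outputs) shows the decoding step that distinguishes multi-label from single-label classification: each label gets an independent sigmoid and threshold, rather than a shared softmax that forces exactly one winner.

```python
import math

def multilabel_decode(logits, labels, threshold=0.5):
    """Multi-label decoding: pass each logit through an independent
    sigmoid and keep every label whose probability clears the
    threshold. A response can thus receive zero, one, or several
    labels, unlike softmax decoding, which always picks exactly one."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [label for label, p in zip(labels, probs) if p >= threshold]
```

With logits of 2.0, -1.0, and 0.1 over hypothetical categories "conceptual", "perceptual", and "graphical", the rule keeps "conceptual" and "graphical", so a single student explanation can be coded with more than one mental representation at once.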