Title: Traveling Bazaar: Portable Support for Face-to-Face Collaboration
For nearly two decades, conversational agents have been used to structure group interactions in online chat-based environments. More recently, this form of dynamic support for collaborative learning has been extended to physical spaces using a combination of multimodal sensing technologies and instrumentation installed within the space. This demo extends the reach of dynamic support for collaboration still further through an application of what has recently been termed on-device machine learning, which enables a portable form of multimodal detection to trigger real-time responses.
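The abstract does not name a specific sensing stack or model, so the following Python sketch is only a hypothetical illustration of the pattern it describes: sensor frames are analyzed locally (on-device, with no server round-trip) and a real-time agent prompt fires when a simple condition, here talk-time dominance, is detected. The stand-in heuristic classifier, thresholds, and prompt text are assumptions, not details of the Traveling Bazaar demo.

    # Hypothetical detect-and-respond loop for portable, on-device collaboration support.
    import time
    import numpy as np

    def speaker_activity(frame: np.ndarray, n_speakers: int = 4) -> np.ndarray:
        """Stand-in for an on-device model: per-speaker activity scores for one window.
        A real deployment would run a small learned classifier here, executing
        entirely on the tablet or phone rather than on a server."""
        chunks = np.array_split(frame, n_speakers)   # pretend each chunk is one mic channel
        return np.array([float(np.mean(c ** 2)) for c in chunks])

    def maybe_prompt(scores: np.ndarray, dominance: float = 0.7) -> None:
        """Trigger a real-time facilitation move if one participant dominates the talk."""
        if scores.max() / (scores.sum() + 1e-9) > dominance:
            print("Agent: let's pause and hear from someone who hasn't spoken yet.")

    # Simulated stream of one-second audio windows; a live microphone would feed this loop.
    for _ in range(5):
        frame = np.random.randn(16000)
        maybe_prompt(speaker_activity(frame))
        time.sleep(0.01)                             # stand-in for real-time pacing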
Award ID(s):
2100401
PAR ID:
10437737
Author(s) / Creator(s):
Date Published:
Journal Name:
International Collaboration toward Educational Innovation for All: International Society of the Learning Sciences (ISLS) Annual Meeting 2023
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Understanding abstract concepts in mathematics has long presented a challenge, but the use of directed and spontaneous gestures has been shown to support learning and ground higher-order thought. Within embodied learning, gesture has been investigated as part of a multimodal assemblage with speech and movement, centering the body in interaction with the environment. We present a case study of one dyad’s undertaking of a robotic arm activity, targeting learning outcomes in matrix algebra, robotics, and spatial thinking. Through a body syntonicity lens and drawing on video and pre- and post-assessment data, we evaluate learning gains and investigate the multimodal processes contributing to them. We found that gesture, speech, and body movement grounded understanding of vector and matrix operations, spatial reasoning, and robotics, anchored by the physical robotic arm, with implications for the design of learning environments that employ directed gestures.
  2. Our study is motivated by robotics: when dealing with robots or other physical systems, we often need to balance complex, multimodal data from a variety of sensors against a general lack of large, representative datasets. Despite the complexity of modern robotic platforms and the need for multimodal interaction, there has been little research on integrating more than two modalities in a low-data regime under the real-world constraint that sensors fail due to obstructions or adverse conditions. In this work, we consider a case in which natural language is used as a retrieval query against objects, represented across multiple modalities, in a physical environment. We introduce extended multimodal alignment (EMMA), a method that learns to select the appropriate object while jointly refining modality-specific embeddings through a geometric (distance-based) loss. In contrast to prior work, our approach is able to incorporate an arbitrary number of views (modalities) of a particular piece of data. We demonstrate the efficacy of our model on a grounded language object retrieval scenario. We show that our model outperforms state-of-the-art baselines when little training data is available. Our code is available at https://github.com/kasraprime/EMMA.
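The paper’s exact EMMA objective is not reproduced above, so the following PyTorch sketch is only a hypothetical illustration of the kind of geometric, distance-based alignment it describes: a language-query embedding is pulled toward the matching object’s embeddings across an arbitrary number of observed views and pushed a margin away from distractors, with a mask standing in for failed sensors. The tensor shapes, margin, and masking scheme are assumptions.

    # Hypothetical distance-based multimodal alignment loss (not the authors' exact objective).
    import torch
    import torch.nn.functional as F

    def alignment_loss(query, views, target_idx, mask, margin=1.0):
        """
        query:      (d,) language embedding for the retrieval query
        views:      (num_objects, num_modalities, d) per-modality object embeddings
        target_idx: index of the object the query refers to
        mask:       (num_objects, num_modalities), 1.0 if that view was observed, else 0.0
        """
        dists = torch.norm(views - query, dim=-1)               # (objects, modalities)
        dists = dists * mask                                    # drop unobserved views
        per_object = dists.sum(dim=-1) / mask.sum(dim=-1).clamp(min=1.0)
        pos = per_object[target_idx]                            # distance to the correct object
        neg = torch.cat([per_object[:target_idx], per_object[target_idx + 1:]])
        # Hinge: the correct object should be at least `margin` closer than each distractor.
        return F.relu(margin + pos - neg).mean()

    # Toy usage: 4 candidate objects, 3 views each (e.g. RGB, depth, audio), 32-dim embeddings.
    query = torch.randn(32, requires_grad=True)
    views = torch.randn(4, 3, 32, requires_grad=True)
    mask = torch.ones(4, 3)
    mask[2, 1] = 0.0                                            # simulate one obstructed sensor
    loss = alignment_loss(query, views, target_idx=0, mask=mask)
    loss.backward()                                             # gradients refine both embedding sets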
  3. Numerous computer-based collaborative learning environments have been developed to support collaborative problem-solving. Yet, understanding the complexity and dynamic nature of the collaboration process remains a challenge. This is particularly true in open-ended immersive learning environments, where students navigate both physical and virtual spaces, pursuing diverse paths to solve problems. In response, we aimed to unpack these complex collaborative learning processes by investigating 16 groups of college students (n = 77) who used an immersive astronomy simulation in their introductory astronomy course. Our specific focus is on joint attention as a multi-level indicator of collaboration. To examine the interplay between joint attention and other multimodal traces (conceptual discussions and gestures) in students’ interactions with peers and the simulation, we employed a multi-granular approach encompassing macro-level correlations, meso-level network trends, and micro-level qualitative insights from vignettes to capture nuances at different levels. Distinct multimodal engagement patterns emerged between low- and high-achieving groups, evolving over time across a series of tasks. Our findings contribute to the understanding of timely joint attention and emphasize the importance of individual exploration during the early stages of collaborative problem-solving, demonstrating its contribution to productive knowledge co-construction. Overall, this research provides valuable insights into the complexities of collaboration dynamics within and beyond digital spaces. The empirical evidence we present lays a strong foundation for developing instructional designs aimed at fostering productive collaboration in immersive learning environments.
  4. Alzheimer’s Disease (AD) is a chronic neurodegenerative disease that causes severe problems in patients’ thinking, memory, and behavior. An early diagnosis is crucial to prevent AD progression; to this end, many algorithmic approaches have recently been proposed to predict cognitive decline. However, these predictive models often fail to integrate heterogeneous genetic and neuroimaging biomarkers and struggle to handle missing data. In this work, we propose a novel objective function and an associated optimization algorithm to identify cognitive decline related to AD. Our approach is designed to incorporate dynamic neuroimaging data by way of a participant-specific augmentation combined with multimodal data integration aligned via a regression task. To incorporate additional side information, it draws on structured regularization techniques popularized in the recent AD literature. Armed with the fixed-length vector representation learned from the dynamic and static modalities, conventional machine learning methods can be used to predict the clinical outcomes associated with AD. Our experimental results show that the proposed augmentation model improves prediction performance on cognitive assessment scores for a collection of popular machine learning algorithms. The results of our approach are interpreted to validate existing genetic and neuroimaging biomarkers that have been shown to be predictive of cognitive decline.
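The objective function and augmentation themselves are not given above; as a minimal, hypothetical illustration of only the downstream step the abstract describes, where a fixed-length multimodal vector is handed to conventional machine learning to predict a cognitive score, the scikit-learn sketch below imputes missing entries, concatenates synthetic genetic and neuroimaging features, and fits a ridge regressor. The feature names, imputation strategy, and model choice are assumptions for illustration only.

    # Hypothetical downstream step: conventional regression on a fixed-length multimodal vector.
    import numpy as np
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_participants = 200
    genetic = rng.normal(size=(n_participants, 30))         # synthetic SNP-derived features
    imaging = rng.normal(size=(n_participants, 50))         # synthetic flattened longitudinal imaging features
    imaging[rng.random(imaging.shape) < 0.1] = np.nan       # simulate missing scans
    X = np.hstack([genetic, imaging])                       # fixed-length multimodal vector per participant
    y = rng.normal(size=n_participants)                     # synthetic cognitive assessment score

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = make_pipeline(SimpleImputer(strategy="mean"),   # handle missing modality entries
                          StandardScaler(),
                          Ridge(alpha=1.0))
    model.fit(X_train, y_train)
    print("held-out R^2:", model.score(X_test, y_test))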
  5. Previous research has established that embodied modeling (role-playing agents in a system) can support learning about complexity. Separately, research has demonstrated that increasing the multimodal resources available to students can support sensemaking, particularly for students classified as English Learners. This study bridges these two bodies of research to consider how embodied models can strengthen an interconnected system of multimodal models created by a classroom. We explore how iteratively refining embodied modeling activities strengthened connections to other models, real-world phenomena, and multimodal representations. Through design-based research in a sixth-grade classroom studying ecosystems, we refined embodied modeling activities initially conceived as supports for computational thinking and modeling. Across three iterative cycles, we illustrate how the conceptual and epistemic relationship between the computational and embodied models shifted, and we analyze how these shifts shaped opportunities for learning and participation by: (1) recognizing each student’s perspective as critical for making sense of the model, (2) encouraging students to question and modify the “code” for the model, and (3) leveraging multimodal resources, including graphs, gestures, and student-generated language, for meaning-making. Through these shifts, the embodied model became a full-fledged component of the classroom’s model system and created more equitable opportunities for learning and participation.