

Title: Exploring an Affective and Responsive Virtual Environment to Improve Remote Learning
Online classes are typically conducted using video conferencing software such as Zoom, Microsoft Teams, and Google Meet. Research has identified drawbacks of online learning, such as “Zoom fatigue”, characterized by distraction and lack of engagement. This study presents the CUNY Affective and Responsive Virtual Environment (CARVE) Hub, a novel virtual reality hub that uses a facial emotion classification model to generate emojis for affective and informal responsive interaction in a 3D virtual classroom setting. A web-based machine learning model is employed for facial emotion classification, enabling students to communicate four basic emotions live through automated web camera capture in a virtual classroom without activating their cameras. The experiment is conducted in undergraduate classes on both Zoom and CARVE, and survey results indicate that students perceive interactions in the proposed virtual classroom more positively than in Zoom. Correlations between automated emojis and interactions are also observed. This study discusses potential explanations for the improved interactions, including reduced pressure on students when they are not showing their faces. In addition, video panels in traditional remote classrooms may be useful for communication but not for interaction. Students favor virtual reality features such as spatial audio and the ability to move around, with collaboration identified as the most helpful feature.
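The abstract describes a pipeline that classifies a webcam frame into one of four basic emotions and surfaces only an emoji to classmates. A minimal sketch of that mapping step follows; the class labels, confidence threshold, and function name are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the CARVE implementation): map the output of a
# facial-emotion classifier to an emoji so that only the emoji, not the
# video frame, is shared with the class. Labels and threshold are assumed.
EMOJI_MAP = {
    "happy": "\U0001F60A",
    "sad": "\U0001F622",
    "angry": "\U0001F620",
    "surprised": "\U0001F62E",
}

def emoji_for(scores, threshold=0.5):
    """Return an emoji for the top-scoring emotion, or None if uncertain."""
    label, score = max(scores.items(), key=lambda kv: kv[1])
    return EMOJI_MAP[label] if score >= threshold else None
```

In a real deployment, `scores` would come from a browser-side model running on periodically captured webcam frames; suppressing the emoji below a confidence threshold avoids flickering between emotions on low-quality predictions.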
Award ID(s):
1827505 2131186 1737533
PAR ID:
10440665
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Virtual Worlds
Volume:
2
Issue:
1
ISSN:
2813-2084
Page Range / eLocation ID:
53 to 74
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The COVID-19 pandemic led the majority of educational institutions to shift rapidly to primarily online, remote course delivery. The tools used for synchronous online delivery varied across institutions. They included traditional video conferencing tools such as Zoom, Google Meet, and WebEx, as well as non-traditional tools such as Gather.Town, Gatherly, and YoTribe. The main distinguishing characteristic of these non-traditional tools is their use of 2-D maps to create virtual meeting spaces that mimic real-world spaces. In this work, we aim to explore how such tools are perceived by students in the context of learning. Our intuition is that a tool featuring a 2-D virtual space resembling a real-world classroom has underlying benefits compared with more traditional video conferencing tools. The results of our study indicate that students perceived that using a 2-D virtual classroom improved their interaction, collaboration, and overall satisfaction with the online learning experience.
  2. Agents must monitor their partners' affective states continuously in order to understand and engage in social interactions. However, methods for evaluating affect recognition do not account for changes in classification performance that may occur during occlusions or transitions between affective states. This paper addresses temporal patterns in affect classification performance in the context of an infant-robot interaction, where infants’ affective states contribute to their ability to participate in a therapeutic leg movement activity. To support robustness to facial occlusions in video recordings, we trained infant affect recognition classifiers using both facial and body features. Next, we conducted an in-depth analysis of our best-performing models to evaluate how performance changed over time as the models encountered missing data and changing infant affect. During time windows when features were extracted with high confidence, a unimodal model trained on facial features achieved the same optimal performance as multimodal models trained on both facial and body features. However, multimodal models outperformed unimodal models when evaluated on the entire dataset. Additionally, model performance was weakest when predicting an affective state transition and improved after multiple predictions of the same affective state. These findings emphasize the benefits of incorporating body features in continuous affect recognition for infants. Our work highlights the importance of evaluating variability in model performance both over time and in the presence of missing data when applying affect recognition to social interactions. 
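The finding that a face-only model matches multimodal models when facial features are extracted with high confidence, while multimodal models win on the full dataset, suggests a simple late-fusion scheme. The sketch below is an assumed illustration of such fusion, not the authors' method; the affect labels and confidence threshold are made up.

```python
# Hypothetical late fusion of facial and body affect predictions: use both
# streams when the face is tracked confidently, and fall back to body
# features alone during occlusions. Labels and threshold are illustrative.
def fuse(face_probs, body_probs, face_conf, conf_threshold=0.5):
    """Average per-class probabilities; drop the facial stream when unreliable."""
    if face_probs is None or face_conf < conf_threshold:
        return dict(body_probs)
    return {c: 0.5 * face_probs[c] + 0.5 * body_probs[c] for c in face_probs}
```

This kind of fallback is one plausible reason multimodal models would outperform unimodal ones over an entire recording even when the two are tied on high-confidence windows.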
  3. Endowing automated agents with the ability to provide support, entertainment, and interaction with human beings requires sensing of the users’ affective state. These affective states are impacted by a combination of emotion inducers, current psychological state, and various conversational factors. Although emotion classification in both singular and dyadic settings is an established area, the effects of these additional factors on the production and perception of emotion are understudied. This paper presents a new dataset, Multimodal Stressed Emotion (MuSE), to study the multimodal interplay between the presence of stress and expressions of affect. We describe the data collection protocol, the possible areas of use, and the annotations for the emotional content of the recordings. The paper also presents several baselines to measure the performance of multimodal features for emotion and stress classification.
  4. Virtual reality offers vast possibilities to enhance the conventional approach to delivering engineering education. Introducing virtual reality technology into teaching can improve the undergraduate mechanical engineering curriculum by supplementing the traditional learning experience with outside-the-classroom materials. The Center for Aviation and Automotive Technological Education using Virtual E-Schools (CA2VES), in collaboration with the Clemson University Center for Workforce Development (CUCWD), has developed a comprehensive virtual reality-based learning system. The available e-learning materials include eBooks, mini-video lectures, three-dimensional virtual reality technologies, and online assessments. Select VR-based materials were introduced to students in a sophomore-level mechanical engineering laboratory course via fourteen online course modules over a four-semester period. To evaluate the material, a comparison of student performance with and without the material, along with instructor feedback, was completed. Feedback from the instructor and the teaching assistant revealed that the material was effective in improving laboratory safety and boosted students’ confidence in handling engineering tools.
  5. Emotion recognition in social situations is a complex task that requires integrating information from both facial expressions and the situational context. While traditional approaches to automatic emotion recognition have focused on decontextualized signals, recent research emphasizes the importance of context in shaping emotion perceptions. This paper contributes to the emerging field of context-based emotion recognition by leveraging psychological theories of human emotion perception to inform the design of automated methods. We propose an approach that combines emotion recognition methods with Bayesian Cue Integration (BCI) to integrate emotion inferences from decontextualized facial expressions and contextual knowledge inferred via large language models. We test this approach in the context of interpreting facial expressions during a social task, the prisoner’s dilemma. Our results provide clear support for BCI across a range of automatic emotion recognition methods. The best automated method achieved results comparable to human observers, suggesting the potential for this approach to advance the field of affective computing.
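Bayesian Cue Integration in this setting amounts to treating the face-based and context-based emotion inferences as independent cues, multiplying their per-emotion probabilities, and renormalizing. A minimal numeric sketch, assuming made-up emotion labels and probabilities:

```python
# Minimal Bayesian Cue Integration sketch: combine independent per-emotion
# probabilities from a facial-expression model and a context model (e.g. an
# LLM's reading of the situation) by multiplying and renormalizing.
def integrate_cues(face_probs, context_probs):
    joint = {e: face_probs[e] * context_probs[e] for e in face_probs}
    total = sum(joint.values())
    return {e: p / total for e, p in joint.items()}
```

With face probabilities of 0.6 joy / 0.4 anger and context probabilities of 0.3 joy / 0.7 anger, the integrated posterior favors anger, illustrating how situational context can overturn a decontextualized facial reading.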