The immersion of virtual reality (VR) can shape user perceptions in numerous ways, including racial bias and embodied experiences. These effects are often limited to head-mounted displays (HMDs) and other immersive technologies that may not be accessible to the general population. This paper investigates racial bias and embodiment in a less immersive but more accessible medium: desktop VR. Participants (n = 158) completed a desktop simulation in which they embodied a virtual avatar and interacted with virtual humans, allowing us to determine whether desktop embodiment is induced and whether it has a resulting effect on racial bias. Our results indicate that desktop embodiment can be induced at low levels, as measured by an embodiment questionnaire. Furthermore, one’s implicit bias may actually influence embodiment, and the experience and perceptions of a desktop VR simulation can be improved through embodied avatars. We discuss these findings and their implications in the context of stereotype activation and the existing literature on embodiment.
Towards Anatomy Education with Generative AI-based Virtual Assistants in Immersive Virtual Reality Environments
Virtual reality (VR) and interactive 3D visualization systems have enhanced educational experiences and environments, particularly in complicated subjects such as anatomy education. VR-based systems surpass the potential limitations of traditional training approaches in facilitating interactive engagement among students. However, research on embodied virtual assistants that leverage generative artificial intelligence (AI) and verbal communication in the context of anatomy education is underrepresented. In this work, we introduce a VR environment with a generative AI-embodied virtual assistant that enables verbal communication and supports participants in responding to anatomy questions of varying cognitive complexity. We assessed the technical efficacy and usability of the proposed environment in a pilot user study with 16 participants. We used a within-subject design for virtual assistant configuration (avatar- and screen-based), with two levels of cognitive complexity (knowledge- and analysis-based). The results reveal a significant difference between the scores obtained from knowledge- and analysis-based questions in relation to avatar configuration. Moreover, the results provide insights into usability, cognitive task load, and the sense of presence in the proposed virtual assistant configurations. Our environment and the results of the pilot study offer potential benefits and future research directions beyond medical education, using generative AI and embodied virtual agents as customized virtual conversational assistants.
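The assistant described above can be pictured as a simple turn-taking pipeline: a spoken question is transcribed, answered by a generative model, and voiced by the embodied avatar. The sketch below is illustrative only, not the authors' implementation; `transcribe`, `generate_answer`, and the `AssistantConfig` fields are hypothetical stand-ins for the speech and generative-AI services a real system would use.

```python
# Minimal sketch of a generative-AI virtual-assistant turn.
# All functions are hypothetical stubs, not the study's actual system.
from dataclasses import dataclass


@dataclass
class AssistantConfig:
    embodiment: str   # "avatar" or "screen" (the two study configurations)
    complexity: str   # "knowledge" or "analysis" (the two question levels)


def transcribe(audio: bytes) -> str:
    # Stand-in for a speech-to-text service.
    return audio.decode("utf-8")


def generate_answer(question: str, complexity: str) -> str:
    # Stand-in for a generative-AI backend; a real system would
    # prompt a large language model here.
    prefix = "Recall" if complexity == "knowledge" else "Reasoning"
    return f"[{prefix}] answer to: {question}"


def handle_turn(audio: bytes, cfg: AssistantConfig) -> str:
    question = transcribe(audio)
    answer = generate_answer(question, cfg.complexity)
    # An avatar-based configuration would additionally drive
    # lip-sync and gestures before vocalizing the answer.
    return answer


cfg = AssistantConfig(embodiment="avatar", complexity="knowledge")
print(handle_turn(b"Which bone is the longest in the body?", cfg))
```

The within-subject conditions of the study (avatar vs. screen, knowledge vs. analysis) map naturally onto the two configuration fields.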
- PAR ID:
- 10555120
- Publisher / Repository:
- IEEE
- Date Published:
- Journal Name:
- IEEE International Conference on Artificial Intelligence and Virtual Reality
- ISSN:
- 2771-7453
- ISBN:
- 979-8-3503-7202-1
- Page Range / eLocation ID:
- 21 to 30
- Subject(s) / Keyword(s):
- Visualization; Generative AI; Virtual assistants; Avatars; Complexity theory; Usability; Task analysis; virtual reality; human-computer interaction; embodied virtual assistants; anatomy education
- Format(s):
- Medium: X
- Location:
- Los Angeles, CA, USA
- Sponsoring Org:
- National Science Foundation
More Like this
-
Extended reality (XR) technologies, such as virtual reality (VR) and augmented reality (AR), provide users, their avatars, and embodied agents a shared platform to collaborate in a spatial context. Although traditional face-to-face communication is limited by users’ proximity, meaning that another human’s non-verbal embodied cues become more difficult to perceive the farther one is away from that person, researchers and practitioners have started to look into ways to accentuate or amplify such embodied cues and signals to counteract the effects of distance with XR technologies. In this article, we describe and evaluate the Big Head technique, in which a human’s head in VR/AR is scaled up relative to their distance from the observer as a mechanism for enhancing the visibility of non-verbal facial cues, such as facial expressions or eye gaze. To better understand and explore this technique, we present two complementary human-subject experiments. In our first experiment, we conducted a VR study with a head-mounted display to understand the impact of increased or decreased head scales on participants’ ability to perceive facial expressions as well as their sense of comfort and feeling of “uncanniness” over distances of up to 10 m. We explored two different scaling methods and compared perceptual thresholds and user preferences. Our second experiment was performed in an outdoor AR environment with an optical see-through head-mounted display. Participants were asked to estimate facial expressions and eye gaze, and identify a virtual human over large distances of 30, 60, and 90 m. In both experiments, our results show significant differences in minimum, maximum, and ideal head scales for different distances and tasks related to perceiving faces, facial expressions, and eye gaze, and we also found that participants were more comfortable with slightly bigger heads at larger distances. We discuss our findings with respect to the technologies used, and we discuss implications and guidelines for practical applications that aim to leverage XR-enhanced facial cues.
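The core of a technique like Big Head is a mapping from observer distance to head scale. The sketch below is an assumption-laden illustration, not the authors' exact method: the onset distance, gain, and clamp values are hypothetical, and the paper itself compares multiple scaling methods.

```python
# Illustrative distance-based head-scaling function in the spirit of the
# Big Head technique. The parameter values are assumptions for the sketch.

def big_head_scale(distance_m: float,
                   near_m: float = 2.0,    # assumed distance where scaling begins
                   gain: float = 0.5,      # assumed scale increase per metre
                   max_scale: float = 6.0  # assumed clamp to limit uncanniness
                   ) -> float:
    """Return the head scale factor for an avatar at the given distance."""
    if distance_m <= near_m:
        return 1.0  # natural head size up close
    # Grow linearly with distance beyond the onset, clamped to a maximum.
    return min(1.0 + gain * (distance_m - near_m), max_scale)


print(big_head_scale(1.0))   # natural size within the near threshold
print(big_head_scale(10.0))  # enlarged at the VR study's maximum distance
print(big_head_scale(90.0))  # clamped at the AR study's farthest distance
```

The study's finding that ideal scales differ by distance and task suggests that, in practice, these parameters would be tuned per task rather than fixed.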
-
The rapid adoption of generative AI in software development has impacted the industry, yet its effects on developers with visual impairments remain largely unexplored. To address this gap, we used an Activity Theory framework to examine how developers with visual impairments interact with AI coding assistants. For this purpose, we conducted a study where developers who are visually impaired completed a series of programming tasks using a generative AI coding assistant. We uncovered that, while participants found the AI assistant beneficial and reported significant advantages, they also highlighted accessibility challenges. Specifically, the AI coding assistant often exacerbated existing accessibility barriers and introduced new challenges. For example, it overwhelmed users with an excessive number of suggestions, leading developers who are visually impaired to express a desire for “AI timeouts.” Additionally, the generative AI coding assistant made it more difficult for developers to switch contexts between the AI-generated content and their own code. Despite these challenges, participants were optimistic about the potential of AI coding assistants to transform the coding experience for developers with visual impairments. Our findings emphasize the need to apply activity-centered design principles to generative AI assistants, ensuring they better align with user behaviors and address specific accessibility needs. This approach can enable the assistants to provide more intuitive, inclusive, and effective experiences, while also contributing to the broader goal of enhancing accessibility in software development.
-
Immersive environments enable users to engage in embodied interaction, enhancing the sensemaking processes involved in completing tasks such as immersive analytics. Previous comparative studies on immersive analytics using augmented and virtual realities have revealed that users employ different strategies for data interpretation and text-based analytics depending on the environment. Our study seeks to investigate how augmented and virtual reality influence sensemaking processes in quantitative immersive analytics. Our results, derived from a diverse group of participants, indicate that users demonstrate comparable performance in both environments. However, it was observed that users exhibit a higher tolerance for cognitive load in VR and travel further in AR. Based on our findings, we recommend providing users with the option to switch between AR and VR, thereby enabling them to select an environment that aligns with their preferences and task requirements.
-
Virtual reality (VR) has been widely used for education and affords embodied learning experiences. Here we describe: Scale Worlds (SW), an immersive virtual environment to allow users to shrink or grow by powers of ten (10X) and experience entities from molecular to astronomical levels; and students’ impressions and outcomes from experiencing SW in a CAVE (Figure 1) during experiential summer outreach sessions. Data collected from post-visit surveys of 69 students, and field observations, revealed that VR technologies: enabled interactive learning experiences; encouraged active engagement and discussions among participating students; enhanced the understanding of size and scale; and increased interest in STEM careers.
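The shrink-or-grow mechanic described above amounts to stepping the user's scale exponent by powers of ten within molecular-to-astronomical bounds. The sketch below is a toy illustration inspired by that description; the exponent bounds and function names are assumptions, not taken from Scale Worlds itself.

```python
# Toy sketch of stepping a user's scale by powers of ten, as in the
# Scale Worlds description. Bounds and names are hypothetical.

MIN_EXP, MAX_EXP = -10, 12   # assumed range: molecular to astronomical (metres)


def step_scale(exponent: int, direction: int) -> int:
    """Shrink (direction=-1) or grow (direction=+1) by one power of ten,
    clamped to the environment's smallest and largest scales."""
    return max(MIN_EXP, min(MAX_EXP, exponent + direction))


exp = 0                      # start at human scale, 10^0 m
exp = step_scale(exp, -1)    # shrink one step
print(exp, 10.0 ** exp)      # now at 10^-1 m
```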

