Virtual reality (VR) and interactive 3D visualization systems have enhanced educational experiences and environments, particularly in complex subjects such as anatomy education. VR-based systems can overcome limitations of traditional training approaches by facilitating interactive engagement among students. However, research on embodied virtual assistants that leverage generative artificial intelligence (AI) and verbal communication in the anatomy education context is underrepresented. In this work, we introduce a VR environment with a generative AI-embodied virtual assistant that supports participants in answering anatomy questions of varying cognitive complexity and enables verbal communication. We assessed the technical efficacy and usability of the proposed environment in a pilot user study with 16 participants. We used a within-subject design for virtual assistant configuration (avatar- and screen-based), with two levels of cognitive complexity (knowledge- and analysis-based). The results reveal a significant difference between the scores obtained on knowledge- and analysis-based questions in relation to avatar configuration. Moreover, the results provide insights into usability, cognitive task load, and the sense of presence in the proposed virtual assistant configurations. Our environment and the results of the pilot study suggest potential benefits and future research directions beyond medical education, using generative AI and embodied virtual agents as customized virtual conversational assistants.
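The distinction between knowledge- and analysis-based questions follows Bloom's-taxonomy-style levels of cognitive complexity. A minimal sketch of how such questions might be tagged by cue words is shown below; the cue lists and the function name are illustrative assumptions, not the study's actual protocol.

```python
# Hypothetical sketch: tag anatomy questions as knowledge- or analysis-based
# using simple Bloom's-taxonomy cue words. Cue lists are assumptions for
# illustration, not the instrument used in the study.

KNOWLEDGE_CUES = ("name", "identify", "list", "define", "label")
ANALYSIS_CUES = ("compare", "differentiate", "explain why", "relate", "infer")

def classify_question(question: str) -> str:
    """Return 'knowledge' or 'analysis' based on cue words in the question."""
    q = question.lower()
    if any(cue in q for cue in ANALYSIS_CUES):
        return "analysis"
    if any(cue in q for cue in KNOWLEDGE_CUES):
        return "knowledge"
    return "unclassified"
```

In practice, a validated question bank would replace keyword matching, but the two-level split mirrors the study design.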
Predictive Power of Pupil Dynamics in a Team Based Virtual Reality Task
Assessing and tracking physiological and cognitive states of multiple individuals interacting in virtual environments is of increasing interest to the virtual reality (VR) community. In this paper, we describe a team-based VR task termed the Apollo Distributed Control Task (ADCT), where individuals, via single independent degree-of-freedom control and limited environmental views, must work together to guide a virtual spacecraft back to Earth. Novel to the experiment is that 1) we simultaneously collect multiple physiological measures, including electroencephalography (EEG), pupillometry, speech signals, and individuals' actions, and 2) we regulate the difficulty of the task and the type of communication between the teammates. Focusing on the analysis of pupil dynamics, which have been linked to a number of cognitive and physiological processes such as arousal, cognitive control, and working memory, we find that pupil diameter changes are predictive of multiple task-related dimensions, including the difficulty of the task, the role of the team member, and the type of communication.
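A common first step in such analyses is baseline-correcting the pupil trace and deriving a per-trial dilation feature. The sketch below is an illustrative simplification, not the authors' pipeline; the baseline window and threshold are assumed values.

```python
import numpy as np

# Illustrative sketch (not the paper's actual pipeline): derive a
# baseline-corrected pupil-dilation feature and use an assumed threshold
# to label trial difficulty. Window length and threshold are assumptions.

def pupil_feature(trace: np.ndarray, baseline_samples: int = 50) -> float:
    """Mean pupil-diameter change relative to a pre-task baseline."""
    baseline = trace[:baseline_samples].mean()
    return float(trace[baseline_samples:].mean() - baseline)

def predict_difficulty(trace: np.ndarray, threshold: float = 0.15) -> str:
    """Label a trial 'hard' if dilation exceeds the assumed threshold (mm)."""
    return "hard" if pupil_feature(trace) > threshold else "easy"
```

A real decoder would feed such features, alongside EEG and speech measures, into a trained classifier rather than a fixed threshold.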
- Award ID(s): 1934968
- PAR ID: 10397444
- Journal Name: 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
- Page Range / eLocation ID: 592 to 593
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract. Objective. Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm. Approach. Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze, and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLM) were used to determine the correlation between these discriminating components and the timing of the different gaze events. HDCA was also used to combine EEG, pupil, and dwell time signals to classify reorienting events. Main results. In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals were different across the two modalities, with EEG reorienting signals leading those of the pupil.
We also found that the hybrid classifier that integrates EEG, pupil, and dwell time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) conditions. Significance. We show that the neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but nevertheless can be captured and integrated to classify target vs. distractor objects to which the human subject orients.
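The hybrid classifier follows the two-stage structure of HDCA: a linear discriminant is learned within each modality, and the per-modality scores are then fused by a second-level discriminant. The sketch below shows this structure under assumed data shapes and regularization; it is schematic, not the authors' implementation.

```python
import numpy as np

# Schematic two-stage combination in the spirit of hierarchical discriminant
# component analysis (HDCA): a Fisher discriminant per modality (e.g. EEG,
# pupil, dwell time), then a second-level discriminant over the fused scores.
# Shapes and the regularization constant are illustrative assumptions.

def fisher_weights(X: np.ndarray, y: np.ndarray, reg: float = 1e-6) -> np.ndarray:
    """Fisher linear discriminant weights for binary labels y in {0, 1}."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    cov = np.cov(X.T) + reg * np.eye(X.shape[1])
    return np.linalg.solve(cov, mu1 - mu0)

def hdca_fit(modalities: list, y: np.ndarray):
    """Stage 1: one weight vector per modality; stage 2: fuse modality scores."""
    stage1 = [fisher_weights(X, y) for X in modalities]
    scores = np.column_stack([X @ w for X, w in zip(modalities, stage1)])
    stage2 = fisher_weights(scores, y)
    return stage1, stage2

def hdca_score(modalities: list, stage1, stage2) -> np.ndarray:
    """Project each modality, then combine with the second-stage weights."""
    scores = np.column_stack([X @ w for X, w in zip(modalities, stage1)])
    return scores @ stage2
```

Higher combined scores indicate the reorienting (target) class; thresholding these scores yields the AUC values reported for the fixed and free conditions.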
This paper investigates a concept called Virtual Ability Simulation (VAS) for people with disability due to Multiple Sclerosis (MS) in a virtual reality (VR) environment. In a VAS, people with a disability perform tasks that are made easier in the virtual environment (VE) compared to the real world. We hypothesized that putting people with disabilities in a VAS would increase confidence and enable more efficient task completion. To investigate this hypothesis, we conducted a within-subjects experiment in which participants performed a virtual task called ''kick the ball'' in two different conditions: a no-gain condition (i.e., same difficulty as in the real world) and a rotational gain condition (i.e., physically easier than the real world but visually the same). The results from our study suggest that VAS increased participants' confidence, which in turn enabled them to perceive the same task as easier.
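A rotational gain maps physical head or body rotation to an amplified virtual rotation, so a smaller physical movement covers the same visual angle. A minimal sketch is below; the gain value of 1.5 is an assumption for illustration, not the value used in the study.

```python
# Minimal sketch of a rotational gain as described for the rotational gain
# condition: virtual rotation is a multiple of physical rotation, so the task
# becomes physically easier while looking visually the same. The gain value
# is an assumed example, not the study's parameter.

def apply_rotational_gain(physical_yaw_deg: float, gain: float = 1.5) -> float:
    """Map physical yaw to virtual yaw; gain = 1.0 reproduces reality."""
    return physical_yaw_deg * gain
```

With a gain of 1.5, turning 60 degrees physically yields 90 degrees of virtual rotation, which is what makes the gained condition physically easier.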
Owing to the increasing dynamics and complexity of construction tasks, workers often need to memorize a large amount of engineering information prior to operations, such as spatial orientations and operational procedures. Working memory development, as a result, is critical to the performance and safety of many construction tasks. This study investigates how the format of engineering information affects human working memory based on a human-subject Virtual Reality (VR) experiment (n=90). A VR model was created to simulate a pipe maintenance task. First, participants were asked to review the task procedures in one of three formats: 2D isometric drawings, a 3D model, or a VR model. After the review session, participants were asked to perform the pipe maintenance task in the virtual environment based on their working memory. Operation accuracy and time were used as the key performance indicators of working memory development. The experiment results indicate that the 3D and VR groups outperformed the 2D group in both operation accuracy and time, suggesting that more immersive instruction leads to better working memory. A further examination finds that the 2D group presented significantly higher intrinsic and extraneous cognitive load during working memory development compared to the 3D and VR groups, indicating that different engineering information formats cause different levels of cognitive load and ultimately affect final performance. The findings are expected to inspire the design of intelligent information systems that adapt to the cognitive load of construction workers for improved working memory development.
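Comparing a performance indicator such as operation accuracy across the three instruction groups is a standard one-way ANOVA setup. The sketch below hand-rolls the F statistic; the data values are made up for illustration and are not the study's results.

```python
import numpy as np

# Illustrative analysis sketch: one-way ANOVA F statistic for comparing a
# performance indicator (e.g. operation accuracy) across the 2D, 3D, and VR
# groups. All numbers in the usage example are fabricated placeholders.

def one_way_anova_f(groups: list) -> float:
    """F = between-group mean square / within-group mean square."""
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_data) - len(groups)
    return float((ss_between / df_between) / (ss_within / df_within))
```

A large F relative to the critical value at (df_between, df_within) would support the reported group difference; post-hoc pairwise tests would then locate which formats differ.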
Purpose. Teamwork in surgery depends on a shared mental model of success, i.e., a common understanding of objectives in the operating room. A shared model leads to increased engagement among team members and is associated with fewer complications and overall better outcomes for patients. However, clinical training typically focuses on role-specific skills, leaving individuals to acquire a shared model indirectly through on-the-job experience. Methods. We investigate whether virtual reality (VR) cross-training, i.e., exposure to other roles, can enhance a shared mental model for non-surgeons more directly. Our study focuses on X-ray guided pelvic trauma surgery, a procedure where successful communication depends on the shared model between the surgeon and a C-arm technologist. We present a VR environment supporting both roles and evaluate a cross-training curriculum in which non-surgeons swap roles with the surgeon. Results. Exposure to the surgical task resulted in higher engagement with the C-arm technologist role in VR, as measured by the mental demand and effort expended by participants. It also had a significant effect on non-surgeons' mental model of the overall task; novice participants' estimation of the mental demand and effort required for the surgeon's task increased after training, while their perception of overall performance decreased, indicating a gap in understanding based solely on observation. This phenomenon was also present for a professional C-arm technologist. Conclusion. Until now, VR applications for clinical training have focused on virtualizing existing curricula. We demonstrate how novel approaches that are not possible outside of a virtual environment, such as role swapping, may enhance the shared mental model of surgical teams by contextualizing each individual's role within the overall task in a time- and cost-efficient manner.
As workflows grow increasingly sophisticated, we see VR curricula as being able to directly foster a shared model for success, ultimately benefiting patient outcomes through more effective teamwork in surgery.