Virtual reality (VR) and interactive 3D visualization systems have enhanced educational experiences and environments, particularly in complex subjects such as anatomy education. VR-based systems can overcome limitations of traditional training approaches by facilitating interactive engagement among students. However, research on embodied virtual assistants that leverage generative artificial intelligence (AI) and verbal communication in the context of anatomy education remains underrepresented. In this work, we introduce a VR environment with a generative-AI-driven embodied virtual assistant that supports participants in answering anatomy questions of varying cognitive complexity and enables verbal communication. We assessed the technical efficacy and usability of the proposed environment in a pilot user study with 16 participants, using a within-subject design for virtual assistant configuration (avatar- and screen-based) with two levels of cognitive complexity (knowledge- and analysis-based). The results reveal a significant difference between the scores obtained on knowledge- and analysis-based questions in relation to the avatar configuration. The results also provide insights into usability, cognitive task load, and sense of presence across the proposed virtual assistant configurations. Our environment and the results of the pilot study suggest potential benefits and future research directions, beyond medical education, for using generative AI and embodied virtual agents as customized conversational assistants.
A Large-Scale Feasibility and Ethnography Study of Screen-based AR and 3D Visualization Tools for Anatomy Education: Exploring Gender Perspectives in Learning Experience
Anatomy education is an indispensable part of medical training, but traditional methods face challenges such as limited resources for dissection in large classes and difficulties understanding anatomy from 2D textbook representations. Advanced technologies such as 3D visualization and augmented reality (AR) are transforming anatomy learning. This paper presents two in-house solutions that use handheld tablets or screen-based AR to visualize 3D anatomy models with informative labels and in-situ visualizations of muscle anatomy. To assess these tools, we conducted a user study on muscle anatomy education with 236 premedical students working in dyadic teams. The results show that the tablet-based 3D visualization and screen-based AR tools led to significantly higher learning experience scores than the traditional textbook. While knowledge retention did not differ significantly across conditions, ethnographic and gender analysis showed that male students generally reported more positive learning experiences than female students. This study discusses the implications for anatomy and medical education, highlighting the potential of these innovative learning tools while considering gender and team dynamics in body-painting anatomy learning interventions.
- Award ID(s): 2321274
- PAR ID: 10555506
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: IEEE International Conference on Artificial Intelligence and Virtual Reality
- ISSN: 2771-7453
- ISBN: 979-8-3503-7202-1
- Page Range / eLocation ID: 205 to 214
- Subject(s) / Keyword(s): Screen-based Augmented Reality; Collaborative Learning; Evaluation Methodologies; Human-Computer Interface; Gender and Ethnography; Training; Visualization; Solid modeling; Three-dimensional displays; Muscles; Painting; Biomedical imaging
- Format(s): Medium: X
- Location: Los Angeles, CA, USA
- Sponsoring Org: National Science Foundation
More Like this
- Advances in computational technology provide opportunities to explore new methods to improve spatial abilities and the understanding of buildings in architecture education. This research employed BIMxAR, a Building Information Modeling-enabled AR educational tool with novel visualization features to support learning and understanding of construction systems, materials configuration, and 3D section views of complex building structures. We validated the research through a test case based on a quasi-experimental design in which BIMxAR was used as the intervention, with two study groups: non-AR and AR. The learning gain differences within and between the groups were not statistically significant; however, the AR group perceived significantly less workload and higher performance compared to the non-AR group. These findings suggest that the AR version is an easy, useful, and convenient learning tool.
- Students often struggle to understand the vector dot product, a foundational operation in mathematics and engineering. To improve undergraduate engineering students' understanding of the dot product, we developed and tested the effects of an augmented reality (AR) app. The app utilized scaffolding and storyline narration to cover: (1) computation of the angle between vectors, and (2) the projection of a force vector onto a line. Students were randomly assigned to either a treatment group that used the AR app or a control group that used traditional peer collaboration. Pre/post testing was conducted using a 14-item, 100-point test. Sixty-one pairs of pre/posttest data (AR: n = 25; control: n = 36) were analyzed using ANCOVA. The 20.9-point improvement in the AR group's mean test score was significantly larger than the 9.33-point increase in the control group, and the effect size (partial η² = 0.135) was considered medium to large. The Instructional Materials Motivation Survey assessed motivation for 12 students in each group; motivation in the AR group was 19.3% higher than in the control group, a significant difference with a large effect size. The results suggest that the 3D visualization and immersive qualities of AR may improve learning of vector operations in STEM disciplines. (An illustrative sketch of the two vector operations covered by the app appears after this list.)
- Augmented Reality (AR) technology offers the possibility of experiencing virtual images alongside physical objects and provides high-quality hands-on experiences in an engineering lab environment. However, students still need help navigating educational content in AR environments because of the mismatch between computer-generated 3D images and the actual physical objects. This limitation can significantly influence their learning processes and workload in AR learning. In addition, a lack of student awareness of their own learning process in AR environments can negatively impact their performance improvement. To overcome these challenges, we introduced a virtual instructor in each AR module and asked a metacognitive question to improve students' metacognitive skills. The results showed that student workload was significantly reduced when a virtual instructor guided students during AR learning. There was also a significant correlation between student learning performance and workload when students were overconfident. The outcome of this study will provide knowledge to improve AR learning environments in higher education settings.
- Synopsis: Contrast-enhanced computed tomography imaging, such as diffusible iodine-based contrast-enhanced computed tomography (diceCT), can provide detailed information on muscle architecture important to comparative analyses of functional morphology using non-destructive approaches. However, manual segmentation of muscle fascicles/fibers is time-consuming, and automated approaches are at times inaccessible and unaffordable. Here, we introduce GoodFibes, an R package for reconstructing muscle architecture in 3D from diceCT image stacks. GoodFibes uses textural analysis of image grayscale values to track straight or curved fiber paths through a muscle image stack. Accessory functions provide quality checking, fiber merging, and 3D visualization and export capabilities. We demonstrate the utility and effectiveness of GoodFibes using two datasets, from ant and bat diceCT scans. In both cases, GoodFibes provides reliable measurements of mean fiber length compared to traditional approaches and is as effective as currently available software packages. This open-source, free-to-use software package will help improve access to tools for analyzing muscle fiber anatomy from diceCT scans. The flexible and transparent R-language environment allows other users to build on the functions described here and permits direct statistical analysis of the resulting fiber metrics. We hope that this will increase the number of comparative and evolutionary studies incorporating these rich and functionally important datasets. (A simplified, illustrative sketch of grayscale-based fiber tracking follows this list.)
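The GoodFibes entry above describes tracking fiber paths through a diceCT image stack via textural analysis of grayscale values. The sketch below is a minimal, hypothetical Python illustration of that general idea, not the GoodFibes R implementation: it assumes the local fiber axis can be estimated from the grayscale structure tensor of a small neighborhood (intensity varies least along the fiber axis), and all function and parameter names are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def local_fiber_direction(volume, point, sigma=2.0):
    """Estimate the local fiber axis at `point` from the grayscale structure tensor.

    Intensity changes least along the fiber axis, so the eigenvector of the
    structure tensor with the smallest eigenvalue approximates that axis.
    (Illustrative assumption; not the GoodFibes algorithm.)
    """
    z, y, x = np.round(point).astype(int)
    lo = np.maximum([z - 5, y - 5, x - 5], 0)             # small neighborhood,
    hi = np.minimum([z + 6, y + 6, x + 6], volume.shape)  # clipped to the volume
    patch = gaussian_filter(
        volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]].astype(float), sigma)
    grads = np.stack([sobel(patch, axis=i) for i in range(3)])
    g = grads.reshape(3, -1)
    tensor = g @ g.T                                      # 3x3 structure tensor
    _, eigvecs = np.linalg.eigh(tensor)                   # eigenvalues ascending
    return eigvecs[:, 0]                                  # smallest-eigenvalue axis

def trace_fiber(volume, seed, step=1.0, n_steps=200):
    """Trace one fiber path from a seed voxel by stepping along the local axis."""
    path = [np.asarray(seed, dtype=float)]
    prev_dir = None
    for _ in range(n_steps):
        direction = local_fiber_direction(volume, path[-1])
        if prev_dir is not None and np.dot(direction, prev_dir) < 0:
            direction = -direction                        # keep a consistent heading
        prev_dir = direction
        nxt = path[-1] + step * direction
        if np.any(nxt < 0) or np.any(nxt >= np.array(volume.shape) - 1):
            break                                         # stop at the volume boundary
        path.append(nxt)
    return np.array(path)

# Fiber length is then the summed distance between consecutive path points:
# length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
```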
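For the dot-product AR study above, a brief worked sketch of the two operations the app covers (the angle between two vectors and the projection of a force vector onto a line) may be helpful. This is a generic Python illustration with made-up numbers, not the app's implementation.

```python
import numpy as np

def angle_between(a, b):
    """Angle in degrees between two vectors, from a . b = |a||b| cos(theta)."""
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def project_onto_line(force, direction):
    """Vector projection of `force` onto a line with direction `direction`:
    proj = (F . u) u, where u is the unit vector along the line."""
    u = np.asarray(direction) / np.linalg.norm(direction)
    return np.dot(force, u) * u

# Example: a 100 N force with components (60, 80, 0) and a line along the x-axis.
F = np.array([60.0, 80.0, 0.0])
line = np.array([1.0, 0.0, 0.0])
print(angle_between(F, line))      # ~53.13 degrees
print(project_onto_line(F, line))  # [60.  0.  0.]
```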

