Title: Stand in surgeon’s shoes: virtual reality cross-training to enhance teamwork in surgery
Purpose: Teamwork in surgery depends on a shared mental model of success, i.e., a common understanding of objectives in the operating room. A shared model leads to increased engagement among team members and is associated with fewer complications and overall better outcomes for patients. However, clinical training typically focuses on role-specific skills, leaving individuals to acquire a shared model indirectly through on-the-job experience.
Methods: We investigate whether virtual reality (VR) cross-training, i.e., exposure to other roles, can enhance a shared mental model for non-surgeons more directly. Our study focuses on X-ray-guided pelvic trauma surgery, a procedure where successful communication depends on the shared model between the surgeon and a C-arm technologist. We present a VR environment supporting both roles and evaluate a cross-training curriculum in which non-surgeons swap roles with the surgeon.
Results: Exposure to the surgical task resulted in higher engagement with the C-arm technologist role in VR, as measured by the mental demand and effort expended by participants. It also had a significant effect on non-surgeons' mental model of the overall task: novice participants' estimation of the mental demand and effort required for the surgeon's task increased after training, while their perception of overall performance decreased, indicating a gap in understanding when based solely on observation. This phenomenon was also present for a professional C-arm technologist.
Conclusion: Until now, VR applications for clinical training have focused on virtualizing existing curricula. We demonstrate how novel approaches that are not possible outside of a virtual environment, such as role swapping, may enhance the shared mental model of surgical teams by contextualizing each individual's role within the overall task in a time- and cost-efficient manner. As workflows grow increasingly sophisticated, we see VR curricula as being able to directly foster a shared model for success, ultimately benefiting patient outcomes through more effective teamwork in surgery.
Award ID(s):
2239077
PAR ID:
10520354
Publisher / Repository:
International Journal of Computer Assisted Radiology and Surgery
Date Published:
Journal Name:
International Journal of Computer Assisted Radiology and Surgery
Volume:
19
Issue:
6
ISSN:
1861-6429
Page Range / eLocation ID:
1213 to 1222
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Successful surgical operations are characterized by preplanning routines to be executed during the actual operation. To achieve this, surgeons rely on experience acquired from cadavers, enabling technologies such as virtual reality (VR), and years of clinical practice. However, cadavers lack dynamism and realism, as they contain no blood and can exhibit tissue degradation and shrinkage, while current VR systems do not provide amplified haptic feedback. This can impact surgical training, increasing the likelihood of medical errors. This work proposes a novel Mixed Reality Combination System (MRCS) that pairs Augmented Reality (AR) technology and an inertial measurement unit (IMU) sensor with 3D-printed, collagen-based specimens to enhance task performance in planning and execution. The MRCS charts out a path prior to task execution based on the visual, physical, and dynamic state of a target object, using surgeon-created virtual imagery that, when projected onto a 3D-printed biospecimen as AR, reacts visually to user input on the specimen's actual physical state. This lets the user see new multi-sensory virtual states of an object before operating on its physical counterpart, enabling effective task planning. User actions tracked with an integrated 9-degree-of-freedom IMU demonstrate task execution, showing that a user with limited knowledge of the specific anatomy can, under guidance, execute a preplanned task. In addition to surgical planning, the system can be applied more generally in areas such as construction, maintenance, and education.
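As a rough illustration of how orientation can be tracked from a 9-degree-of-freedom IMU like the one described, a complementary filter can fuse gyroscope integration with accelerometer tilt estimates. The sketch below is hypothetical and not from the paper; the sample rate, blend gain, and sensor layout are assumptions.

```python
import numpy as np

# Minimal complementary-filter sketch for tilt (roll/pitch) from an IMU.
# Assumed values; the MRCS paper does not specify its fusion algorithm.
ALPHA = 0.98   # blend between gyro integration and accelerometer tilt
DT = 0.01      # assumed 100 Hz sample rate

def tilt_from_accel(acc):
    """Roll and pitch (rad) from an accelerometer reading [ax, ay, az]."""
    ax, ay, az = acc
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return np.array([roll, pitch])

def update(angles, gyro, acc):
    """One filter step: integrate gyro rates (small-angle approximation,
    roll rate ~ gx, pitch rate ~ gy), then correct drift with accel tilt."""
    gyro_angles = angles + np.asarray(gyro[:2]) * DT
    return ALPHA * gyro_angles + (1 - ALPHA) * tilt_from_accel(acc)
```

The high gyro weight keeps the estimate responsive to fast hand motion while the accelerometer term removes slow integration drift.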
  2. Training for robotic surgery can be challenging due to the complexity of the technology, as well as high demand for the robotic systems, which must primarily be used for clinical care. While robotic surgical skills are traditionally trained using the robotic hardware coupled with physical simulated tissue models and test-beds, there has been increasing interest in virtual reality (VR) simulators. VR offers advantages such as the ability to record and track metrics associated with learning. However, evidence of skill transfer from virtual environments to physical robotic tasks has yet to be fully demonstrated. In this work, we evaluate the effect of VR pre-training on performance during a standardized robotic dry-lab training curriculum, in which trainees perform a set of tasks and are evaluated with a score based on completion time and errors made during the task. Results show that VR pre-training has a weakly significant effect in reducing the number of repetitions required to achieve proficiency on the robotic task; however, it does not significantly improve performance on any robotic task. This suggests that important skills are learned during physical training with the surgical robotic system that cannot yet be replaced by VR training.
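The score described, "based on completion time and errors," suggests a penalty-style metric of the kind used in standardized dry-lab curricula. A minimal sketch under assumed weights, not the study's actual formula:

```python
def task_score(completion_time_s: float, errors: int,
               time_cap_s: float = 300.0, error_penalty_s: float = 10.0) -> float:
    """Penalty-style score: remaining time minus per-error penalties, floored
    at zero. The time cap and penalty weight are illustrative assumptions."""
    return max(0.0, time_cap_s - completion_time_s - error_penalty_s * errors)
```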
  3. An important problem in designing human-robot systems is the integration of human intent and performance into the robotic control loop, especially in complex tasks. Bimanual coordination is a complex human behavior that is critical in many fine motor tasks, including robot-assisted surgery. To fully leverage the capabilities of the robot as an intelligent and assistive agent, online recognition of bimanual coordination could be important. Robotic assistance for a suturing task, for example, will be fundamentally different during phases when the suture is wrapped around the instrument (i.e., making a C-loop) than when the ends of the suture are pulled apart. In this study, we develop an online recognition method for bimanual coordination modes (i.e., the directions and symmetries of right- and left-hand movements) using geometric descriptors of hand motion. We (1) develop this framework based on ideal trajectories obtained during virtual 2D bimanual path-following tasks performed by human subjects operating Geomagic Touch haptic devices, (2) test the offline recognition accuracy of bimanual direction and symmetry from human subject movement trials, and (3) evaluate how the framework can be used to characterize 3D trajectories of the da Vinci Surgical System's surgeon-side manipulators during bimanual surgical training tasks. In the human subject trials, our geometric bimanual movement classification accuracy was 92.3% for movement direction (i.e., hands moving together, in parallel, or apart) and 86.0% for symmetry (e.g., mirror or point symmetry). We also show that this approach can be used for online classification of different bimanual coordination modes during needle transfer, C-loop making, and suture pulling gestures on the da Vinci system, with results matching the expected modes. Finally, we discuss how these online estimates are sensitive to task environment factors and surgeon expertise, thus motivating future work on adaptive control strategies to enhance user skill during robot-assisted surgery.
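To give a rough sense of what such geometric descriptors might look like: with left- and right-hand positions and velocities, the direction mode (together, parallel, apart) can be read from the rate of change of hand separation along the inter-hand axis, and mirror symmetry from the left-hand velocity reflected across the mid-plane between the hands. The Python sketch below uses assumed thresholds and is not the paper's actual classifier:

```python
import numpy as np

def classify_direction(p_l, p_r, v_l, v_r, eps=1e-3):
    """Direction mode from hand positions/velocities (illustrative threshold).
    'together'/'away' = separation shrinking/growing along the inter-hand
    axis; 'parallel' = negligible change in hand separation."""
    axis = (p_r - p_l) / np.linalg.norm(p_r - p_l)  # unit inter-hand axis
    closing_rate = np.dot(v_r - v_l, axis)          # d/dt of hand separation
    if closing_rate < -eps:
        return "together"
    if closing_rate > eps:
        return "away"
    return "parallel"

def is_mirror_symmetric(v_l, v_r, axis, tol=0.9):
    """Mirror symmetry: right-hand velocity aligns with the left-hand
    velocity reflected across the plane normal to the inter-hand axis."""
    reflect = np.eye(3) - 2.0 * np.outer(axis, axis)  # Householder reflection
    v_ref = reflect @ v_l
    denom = np.linalg.norm(v_ref) * np.linalg.norm(v_r)
    return denom > 0 and np.dot(v_ref, v_r) / denom > tol
```

In an online setting, descriptors like these would be computed over a sliding window of tracked hand poses rather than a single sample.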
  4. Laparoscopic surgery presents practical benefits over traditional open surgery, including reduced risk of infection, less discomfort, and shorter recovery time for patients. Introducing robotic systems into surgical tasks provides additional enhancements, including improved precision, remote operation, and an intelligent software layer capable of filtering aberrant motion and scaling surgical maneuvers. However, the software interface in telesurgery also exposes the system to potential adversarial cyber attacks. Such attacks can negatively affect both the surgeon's motion commands and the sensory information relayed to the operator. To combat cyber attacks on the latter, one method to strengthen surgeon feedback through multiple sensory pathways is to incorporate reliable, complementary forms of information across different sensory modes. Built-in partial redundancies or inferences between perceptual channels, or perception complementarities, can be used both to detect and to recover from compromised operator feedback. In surgery, haptic sensations are extremely useful for preventing undue and unwanted tissue damage from excessive tool-tissue force. Direct force sensing is not yet deployable due to the sterilization requirements of the operating room. Instead, combinations of other sensing methods may be relied upon, such as non-contact, model-based force estimation. This paper presents the design of a surgical simulator software that can be used for vision-based non-contact force sensing to inform the perception complementarity of vision and force feedback for telesurgery. A brief user study verifies the efficacy of graphical force feedback from vision-based force estimation and suggests that vision may effectively complement direct force sensing.
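As an illustration of non-contact, model-based force estimation, a common simplification treats tissue as linearly elastic, so that estimated force scales with visually tracked deformation depth (Hooke's law). The sketch below, with an assumed stiffness value, is one plausible instantiation and not the paper's method:

```python
import numpy as np

# Assumed tissue stiffness (N/m); real values are tissue-specific and would
# be calibrated against reference measurements, not hard-coded.
STIFFNESS = 200.0

def estimate_force(rest_surface: np.ndarray, deformed_surface: np.ndarray,
                   inward_normal: np.ndarray) -> float:
    """Linear-elastic force estimate from visually tracked surface points.
    rest_surface, deformed_surface: (N, 3) corresponding 3D points;
    inward_normal: unit normal pointing into the tissue at the contact site."""
    displacement = deformed_surface - rest_surface   # per-point motion
    depth = np.max(displacement @ inward_normal)     # peak indentation depth
    return STIFFNESS * max(depth, 0.0)               # F = k * x
```

Such an estimate could then drive the graphical force feedback described above, serving as the vision-side channel of the perception complementarity.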
  5. Virtual reality (VR) systems have been increasingly used in recent years in various domains, such as education and training. Presence, which can be described as "the sense of being there," is one of the most important user experience aspects in VR. Several components may affect the level of presence, such as interaction, visual fidelity, and auditory cues. In recent years, significant effort has been put into increasing the sense of presence in VR. This study focuses on improving user experience in VR by increasing presence through increased interaction fidelity and enhanced illusions. Interaction in real life includes mutual and bidirectional encounters between two or more individuals through shared tangible objects. However, the majority of VR interaction to date has been unidirectional. This research aims to bridge this gap by enabling bidirectional mutual tangible embodied interactions between human users and virtual characters in world-fixed VR through real-virtual shared objects that extend from the virtual world into the real world. I hypothesize that the proposed novel interaction will shrink the boundary between the real and virtual worlds (through virtual characters that affect the physical world), increase the seamlessness of the VR system (enhancing the illusion) and the fidelity of interaction, and increase the levels of presence, social presence, enjoyment, and engagement. This paper includes the motivation, design, and development details of the proposed novel world-fixed VR system, along with future directions.