Training for robotic surgery can be challenging due to the complexity of the technology and the high demand for robotic systems, which must be used primarily for clinical care. While robotic surgical skills are traditionally trained on the robotic hardware coupled with physical simulated tissue models and test-beds, there has been increasing interest in virtual reality (VR) simulators. VR offers advantages such as the ability to record and track metrics associated with learning. However, evidence of skill transfer from virtual environments to physical robotic tasks has yet to be fully demonstrated. In this work, we evaluate the effect of VR pre-training on performance during a standardized robotic dry-lab training curriculum, in which trainees perform a set of tasks and are evaluated with a score based on completion time and errors made during the task. Results show that VR pre-training is weakly significant ([Formula: see text]) in reducing the number of repetitions required to achieve proficiency on the robotic task; however, it does not significantly improve performance on any individual robotic task. This suggests that important skills are learned during physical training with the surgical robotic system that cannot yet be replaced by VR training.
This content will become publicly available on December 1, 2026
Impact of extended reality on robot-assisted surgery training: a systematic review and meta-analysis
Abstract Robot-assisted surgeries (RAS) have an extremely steep learning curve. Because of this, surgeons have created many methods to practice RAS outside the operating room. These training models usually include animal or plastic models; however, extended reality simulators have recently been introduced into surgical training programs. This systematic review and meta-analysis was conducted to determine whether extended reality simulators can improve the performance of robotic novices and how their performance compares to the conventional training of surgeons on surgical robots. Following the PRISMA 2020 guidelines, a systematic review was performed searching PubMed, Embase, Web of Science, and the Cochrane Library for studies that compared the performance of robotic novices who received no additional training, trained with extended reality, or trained with inanimate physical simulators (conventional additional training). Articles that gauged performance using GEARS or time-to-complete measurements were included, while articles that did not make this comparison were excluded. A meta-analysis was performed on the 15 studies found, using SPSS to compare the performance outcomes of the novices after training. Robotic novices trained with extended reality simulators showed a statistically significant improvement in time to complete (Cohen’s d = −0.95, p = 0.02) compared to those with no additional training. Extended reality training also showed no statistically significant difference in time to complete (Cohen’s d = 0.65, p = 0.14) or GEARS scores (Cohen’s d = −0.093, p = 0.34) compared to robotic novices trained with conventional models. This meta-analysis seeks to determine whether extended reality simulators can transfer complex skills to surgeons in a low-cost and low-risk environment.
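The effect sizes reported above are standardized mean differences. A minimal sketch of the pooled-standard-deviation form of Cohen’s d for two independent groups, using hypothetical completion times rather than study data:

```python
# Cohen's d with pooled standard deviation for two independent samples.
# The sample values below are made up for illustration, not study data.
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent groups."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n - 1 denominator)
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical time-to-complete values (seconds); lower is better, so a
# negative d favors the first group.
xr_trained = [210.0, 195.0, 220.0, 200.0]
control = [240.0, 255.0, 230.0, 245.0]
print(round(cohens_d(xr_trained, control), 2))  # prints -3.37
```

A negative d on a time-to-complete outcome, as in the review’s −0.95 estimate, indicates that the trained group finished faster than the comparison group.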
- Award ID(s):
- 2226489
- PAR ID:
- 10629415
- Publisher / Repository:
- Springer
- Date Published:
- Journal Name:
- Journal of Robotic Surgery
- Volume:
- 19
- Issue:
- 1
- ISSN:
- 1863-2491
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Current commercially available robotic minimally invasive surgery (RMIS) platforms provide no haptic feedback of tool interactions with the surgical environment. As a consequence, novice robotic surgeons must rely exclusively on visual feedback to sense their physical interactions with the surgical environment. This technical limitation can make it challenging and time-consuming to train novice surgeons to proficiency in RMIS. Extensive prior research has demonstrated that incorporating haptic feedback is effective at improving surgical training task performance. However, few studies have investigated the utility of providing feedback of multiple modalities of haptic feedback simultaneously (multi-modality haptic feedback) in this context, and these studies have presented mixed results regarding its efficacy. Furthermore, the inability to generalize and compare these mixed results has limited our ability to understand why they can vary significantly between studies. Therefore, we have developed a generalized, modular multi-modality haptic feedback and data acquisition framework leveraging the real-time data acquisition and streaming capabilities of the Robot Operating System (ROS). In our preliminary study using this system, participants complete a peg transfer task using a da Vinci robot while receiving haptic feedback of applied forces, contact accelerations, or both via custom wrist-worn haptic devices. Results highlight the capability of our system in running systematic comparisons between various single and dual-modality haptic feedback approaches.
-
One of the main challenges individuals face when learning an additional language (L2) is learning its sound system, which includes learning to perceive L2 sounds accurately. High variability phonetic training (HVPT) is one method that has proven highly effective at helping individuals develop robust L2 perceptual categories, and recent meta-analytic work suggests that multi-talker training conditions provide a small but statistically reliable benefit compared to single-talker training. However, no study has compared lower and higher variability multi-talker conditions to determine how the number of talkers affects training outcomes, even though such information can shed additional light on how talker variability affects phonetic training. In this study, we randomly assigned 458 L2 Spanish learners to a two-talker or six-talker HVPT group or to a control group that did not receive HVPT. Training focused on L2 Spanish stops. We tested performance on trained talkers and words as well as several forms of generalization. The experimental groups improved more and demonstrated greater generalization than the control group, but neither experimental group outpaced the other. The number of sessions experimental participants completed moderated learning gains.
-
Abstract Successful surgical operations are characterized by preplanning routines to be executed during actual surgical operations. To achieve this, surgeons rely on experience acquired from the use of cadavers, enabling technologies like virtual reality (VR), and clinical years of practice. However, cadavers, which lack dynamism and realism because they have no blood, can exhibit limited tissue degradation and shrinkage, while current VR systems do not provide amplified haptic feedback. This can impact surgical training, increasing the likelihood of medical errors. This work proposes a novel Mixed Reality Combination System (MRCS) that pairs Augmented Reality (AR) technology and an inertial measurement unit (IMU) sensor with 3D printed, collagen-based specimens to enhance task performance such as planning and execution. To achieve this, the MRCS charts out a path prior to a user task execution based on a visual, physical, and dynamic environment on the state of a target object by utilizing surgeon-created virtual imagery that, when projected onto a 3D printed biospecimen as AR, reacts visually to user input on its actual physical state. This allows a real-time user reaction of the MRCS by displaying new multi-sensory virtual states of an object prior to performing on the actual physical state of that same object, enabling effective task planning. Tracked user actions using an integrated 9-degree-of-freedom IMU demonstrate task execution. This shows that a user with limited knowledge of specific anatomy can, under guidance, execute a preplanned task. In addition to surgical planning, this system can be generally applied in areas such as construction, maintenance, and education.
-
Abstract We systematically compared two coding approaches to generate training datasets for machine learning (ML): (i) a holistic approach based on learning progression levels and (ii) a dichotomous, analytic approach of multiple concepts in student reasoning, deconstructed from holistic rubrics. We evaluated four constructed response assessment items for undergraduate physiology, each targeting five levels of a developing flux learning progression in an ion context. Human-coded datasets were used to train two ML models: (i) an 8-classification algorithm ensemble implemented in the Constructed Response Classifier (CRC), and (ii) a single classification algorithm implemented in LightSide Researcher’s Workbench. Human coding agreement on approximately 700 student responses per item was high for both approaches, with Cohen’s kappas ranging from 0.75 to 0.87 on holistic scoring and from 0.78 to 0.89 on analytic composite scoring. ML model performance varied across items and rubric type. For two items, training sets from both coding approaches produced similarly accurate ML models, with differences in Cohen’s kappa between machine and human scores of 0.002 and 0.041. For the other items, ML models trained with analytic coded responses and used for a composite score achieved better performance as compared to using holistic scores for training, with increases in Cohen’s kappa of 0.043 and 0.117. These items used a more complex scenario involving movement of two ions. It may be that analytic coding is beneficial to unpacking this additional complexity.
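The agreement statistic used throughout the abstract above is Cohen’s kappa, which corrects raw rater agreement for the agreement expected by chance. A minimal sketch with hypothetical rater labels, not the study’s coding data:

```python
# Cohen's kappa for two raters: observed agreement corrected for the
# agreement expected by chance from each rater's label frequencies.
# The label sequences below are made up for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length label sequences."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's marginal label probabilities.
    expected = sum(freq_a[label] * freq_b[label] for label in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical holistic codes from two human coders on eight responses.
coder_1 = ["L2", "L2", "L1", "L2", "L1", "L1", "L2", "L2"]
coder_2 = ["L2", "L2", "L1", "L1", "L1", "L1", "L2", "L2"]
print(cohens_kappa(coder_1, coder_2))  # prints 0.75
```

Values in the 0.75 to 0.89 range reported in the study are conventionally read as substantial to almost-perfect agreement.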
