
Title: Evaluation of Pre-Training with the da Vinci Skills Simulator on Motor Skill Acquisition in a Surgical Robotics Curriculum
Training for robotic surgery can be challenging due to the complexity of the technology and the high clinical demand on the robotic systems themselves, which must be used primarily for patient care. While robotic surgical skills are traditionally trained on the robotic hardware coupled with physical simulated tissue models and test-beds, there has been increasing interest in virtual reality (VR) simulators. VR offers advantages such as the ability to record and track metrics associated with learning. However, transfer of skill from virtual environments to physical robotic tasks has yet to be fully demonstrated. In this work, we evaluate the effect of VR pre-training on performance during a standardized robotic dry-lab training curriculum, in which trainees perform a set of tasks and are evaluated with a score based on completion time and errors made during the task. Results show that VR pre-training is weakly significant in reducing the number of repetitions required to achieve proficiency on the robotic task; however, it does not significantly improve performance on any of the robotic tasks. This suggests that important skills are learned during physical training with the surgical robotic system that cannot yet be replaced with VR training.
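For concreteness, here is a minimal sketch (Python, synthetic data) of the kind of analysis the abstract describes: a time-plus-error-penalty task score, and a one-sided comparison of repetitions-to-proficiency between a VR-pretrained group and a control group. The scoring weights, the choice of a Mann-Whitney U test, and all numbers are illustrative assumptions, not the authors' protocol.

```python
# Sketch of a dry-lab scoring and group-comparison analysis (assumptions only).
from scipy.stats import mannwhitneyu

def task_score(completion_time_s: float, errors: int,
               error_penalty_s: float = 10.0) -> float:
    """Lower is better: elapsed time plus a fixed penalty per error
    (hypothetical weighting; the curriculum's actual rubric may differ)."""
    return completion_time_s + error_penalty_s * errors

# Hypothetical repetitions needed to reach a proficiency score threshold.
vr_pretrained = [3, 4, 3, 5, 4, 3]
control = [5, 4, 6, 5, 4, 6]

# One-sided test: does VR pre-training reduce repetitions to proficiency?
stat, p = mannwhitneyu(vr_pretrained, control, alternative="less")
print(f"U = {stat:.1f}, p = {p:.3f}")
```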
Authors:
Award ID(s):
2109635 2102250
Publication Date:
NSF-PAR ID:
10377863
Journal Name:
Journal of Medical Robotics Research
Volume:
06
Issue:
03n04
ISSN:
2424-905X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Successful surgical operations are characterized by preplanning routines to be executed during the actual operation. To achieve this, surgeons rely on experience acquired from cadavers, enabling technologies like virtual reality (VR), and clinical years of practice. However, cadavers lack dynamism and realism (they contain no blood) and can exhibit tissue degradation and shrinkage, while current VR systems do not provide amplified haptic feedback. This can impact surgical training, increasing the likelihood of medical errors. This work proposes a novel Mixed Reality Combination System (MRCS) that pairs Augmented Reality (AR) technology and an inertial measurement unit (IMU) sensor with 3D printed, collagen-based specimens to enhance task performance in planning and execution. The MRCS charts out a path prior to task execution based on a visual, physical, and dynamic representation of the target object's state: surgeon-created virtual imagery, projected onto a 3D printed biospecimen as AR, reacts visually to user input on the object's actual physical state. The MRCS thus responds to the user in real time, displaying new multi-sensory virtual states of an object before the user acts on that same object's physical state, enabling effective task planning. User actions tracked with an integrated 9-degree-of-freedom IMU demonstrate task execution, showing that a user with limited knowledge of the specific anatomy can, under guidance, execute a preplanned task. Beyond surgical planning, this system can be applied in areas such as construction, maintenance, and education.

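As an aside, the sketch below (Python, illustrative only) shows one common way a 9-DOF IMU stream can be reduced to an orientation estimate for tracking user actions: a complementary filter that fuses gyroscope integration (smooth but drifting) with an accelerometer tilt reference (noisy but drift-free). The filter gain and sample values are assumptions; the MRCS's actual sensor-fusion method is not described in the abstract.

```python
# Complementary-filter pitch estimate from gyro + accelerometer samples.
import math

def complementary_filter(gyro_rate_dps, accel_xyz, pitch_deg,
                         dt=0.01, alpha=0.98):
    """Update a single pitch estimate from one gyro/accelerometer sample."""
    ax, ay, az = accel_xyz
    # Drift-free (but noisy) tilt reference from the gravity direction.
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # Trust the integrated gyro short-term, the accelerometer long-term.
    return alpha * (pitch_deg + gyro_rate_dps * dt) + (1 - alpha) * accel_pitch

pitch = 0.0
samples = [(1.5, (0.05, 0.0, 0.99))] * 100  # (deg/s, g units), synthetic
for gyro, accel in samples:
    pitch = complementary_filter(gyro, accel, pitch)
print(f"estimated pitch: {pitch:.2f} deg")
```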
  2. Spatial reasoning skills have been linked to success in STEM and are considered an important part of geoscience problem solving. Most agree that these are a group of skills rather than a single ability, though there is no agreement on the full list of constituent skills. Few studies have attempted to isolate specific spatial skills for deliberate training. We conducted an experiment to isolate and train the skill of recognizing horizontal (a crucial component in measuring the orientation of planes) using a dedicated Virtual Reality (VR) module. We recruited 21 undergraduate students from natural science and social science majors for the study, which consisted of a pretest, 15-minute training, and posttest. The pre- and posttests consisted of a short multiple-choice vocabulary quiz, 5 hand-drawn and 5 multiple-choice Water Level Task (WLT) questions, and the Vandenberg and Kuse Mental Rotation Task (MRT). Participants were sorted based on pretest Water Level Task scores; only those with scores <80% were placed in an intervention group and randomly assigned to about 15 minutes of training, either in VR (experimental) or on paper (standard). The high-scoring participants received no training (comparison). All three groups completed a posttest after the training (if any). After removing three participants who did not return for the posttest session, we had 18 participants in total: 6 in VR, 7 in the comparison group, and 5 in the standard group. Repeated-measures ANOVA of the pre-to-post hand-drawn WLT scores shows at least one group is different (p = .002), and Tukey's post-hoc analysis indicates that the VR group improved significantly more than the high-scoring comparison group (mean difference = -1.857, p = .001) and the standard group (mean difference = -1.200, p = .049). While any significant result is encouraging, a major limitation of this study is the small sample size and unequal variances on both the pretest (Levene's HOV test, F = 7.50, p = .006) and posttest (F = 13.53, p < .001), despite random assignment. More trials are needed to demonstrate reproducibility. While more tests are needed, this preliminary study shows the potential benefit of VR in training spatial reasoning skills.
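The analysis pipeline reported above can be sketched in a few lines of Python. Note that this uses synthetic gain scores and approximates the repeated-measures ANOVA with a one-way ANOVA on pre-to-post gains, so it illustrates the shape of the analysis rather than reproducing the study's computation.

```python
# Sketch: Levene's test, ANOVA on gain scores, and Tukey's HSD (synthetic data).
import numpy as np
from scipy.stats import levene, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

gains = {                      # post - pre hand-drawn WLT scores (synthetic)
    "VR":         [2, 3, 2, 3, 2, 3],
    "standard":   [1, 1, 2, 1, 1],
    "comparison": [0, 1, 0, 0, 1, 0, 1],
}

print(levene(*gains.values()))    # homogeneity of variance check
print(f_oneway(*gains.values()))  # is at least one group different?

scores = np.concatenate(list(gains.values()))
labels = np.repeat(list(gains.keys()), [len(v) for v in gains.values()])
print(pairwise_tukeyhsd(scores, labels))  # which pairs of groups differ?
```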
  3. An important problem in designing human-robot systems is the integration of human intent and performance into the robotic control loop, especially in complex tasks. Bimanual coordination is a complex human behavior that is critical in many fine motor tasks, including robot-assisted surgery. To fully leverage the capabilities of the robot as an intelligent and assistive agent, online recognition of bimanual coordination could be important. Robotic assistance for a suturing task, for example, will be fundamentally different during phases when the suture is wrapped around the instrument (i.e., making a C-loop) than when the ends of the suture are pulled apart. In this study, we develop an online recognition method for bimanual coordination modes (i.e., the directions and symmetries of right- and left-hand movements) using geometric descriptors of hand motion. We (1) develop this framework based on ideal trajectories obtained during virtual 2D bimanual path-following tasks performed by human subjects operating Geomagic Touch haptic devices, (2) test the offline recognition accuracy of bimanual direction and symmetry from human subject movement trials, and (3) evaluate how the framework can be used to characterize 3D trajectories of the da Vinci Surgical System's surgeon-side manipulators during bimanual surgical training tasks. In the human subject trials, our geometric bimanual movement classification accuracy was 92.3% for movement direction (i.e., hands moving together, parallel, or away) and 86.0% for symmetry (e.g., mirror or point symmetry). We also show that this approach can be used for online classification of different bimanual coordination modes during needle transfer, C-loop making, and suture pulling gestures on the da Vinci system, with results matching the expected modes. Finally, we discuss how these online estimates are sensitive to task environment factors and surgeon expertise, and thus inspire future work that could leverage adaptive control strategies to enhance user skill during robot-assisted surgery.
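A minimal sketch, not the paper's method, of how geometric descriptors of hand motion could classify bimanual direction: velocities that point the same way are "parallel"; otherwise, the component of the relative velocity along the hand-to-hand axis separates "together" (closing) from "away" (opening). The threshold and test vectors are assumptions.

```python
# Sketch: geometric classification of bimanual movement direction.
import numpy as np

def classify_direction(pos_l, pos_r, vel_l, vel_r, cos_thresh=0.9):
    axis = (pos_r - pos_l) / np.linalg.norm(pos_r - pos_l)  # left-to-right axis
    cos_sim = vel_l @ vel_r / (np.linalg.norm(vel_l) * np.linalg.norm(vel_r))
    if cos_sim > cos_thresh:
        return "parallel"                  # both hands move the same way
    closing_rate = (vel_r - vel_l) @ axis  # > 0 means the hands are separating
    return "away" if closing_rate > 0 else "together"

pos_l, pos_r = np.array([0.0, 0.0]), np.array([1.0, 0.0])
print(classify_direction(pos_l, pos_r, np.array([0.1, 0.0]),
                         np.array([-0.1, 0.0])))  # -> "together"
print(classify_direction(pos_l, pos_r, np.array([0.0, 0.1]),
                         np.array([0.0, 0.1])))   # -> "parallel"
```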
  4. Advancements in Artificial Intelligence (AI), Information Technology, Augmented Reality (AR) and Virtual Reality (VR), and Robotic Automation are transforming jobs in the Architecture, Engineering and Construction (AEC) industries. However, these technologies are also expected to lead to job displacement, alter the skill profiles of existing jobs, and change how people work. Preparing the workforce for an economy defined by these technologies is therefore imperative. This ongoing research focuses on developing an immersive learning training curriculum to prepare the future workforce of the building industry. In this paper, we demonstrate a prototype mobile AR application that delivers lessons for training construction industry workers in robotic automation. The application allows a user to interact with a virtual robot manipulator to learn its basic operations. The goal is to evaluate the effectiveness of the AR application by gauging participants' performance with pre- and post-surveys.
  5. Background: Sustained engagement is essential for the success of telerehabilitation programs. However, patients' lack of motivation and adherence could undermine these goals. To overcome this challenge, physical exercises have often been gamified. Building on the advantages of serious games, we propose a citizen science-based approach in which patients perform scientific tasks by using interactive interfaces and help advance scientific causes of their choice. This approach capitalizes on human intellect and benevolence while promoting learning. To further enhance engagement, we propose performing citizen science activities in immersive media, such as virtual reality (VR). Objective: This study aims to present a novel methodology to facilitate the remote identification and classification of human movements for the automatic assessment of motor performance in telerehabilitation. The data-driven approach is presented in the context of citizen science software dedicated to bimanual training in VR. Specifically, users interact with the interface and contribute to an environmental citizen science project while moving both arms in concert. Methods: In all, 9 healthy individuals interacted with the citizen science software by using a commercial VR gaming device. The software included a calibration phase to evaluate the users' range of motion along the 3 anatomical planes of motion and to adapt the sensitivity of the software's response to their movements. During calibration, the time series of the users' movements were recorded by the sensors embedded in the device. We performed principal component analysis to identify salient features of the movements and then applied a bagged trees ensemble classifier to classify the movements. Results: The classification achieved high performance, reaching 99.9% accuracy. Among the movements, elbow flexion was the most accurately classified (99.2%), and horizontal shoulder abduction to the right side of the body was the most misclassified (98.8%). Conclusions: Coordinated bimanual movements in VR can be classified with high accuracy. Our findings lay the foundation for the development of motion analysis algorithms in VR-mediated telerehabilitation.
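The classification pipeline described above (PCA for salient features, then a bagged trees ensemble) can be sketched with scikit-learn as follows; the synthetic features, component count, and class structure are illustrative assumptions, not the study's data or parameters.

```python
# Sketch: PCA followed by a bagged decision-tree ensemble (synthetic data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 180 synthetic trials x 60 features (e.g., windowed controller-pose stats),
# 3 hypothetical movement classes (e.g., elbow flexion, shoulder abduction).
X = rng.normal(size=(180, 60))
y = rng.integers(0, 3, size=180)
X[y == 1, :10] += 2.0   # give each class a separable signature
X[y == 2, 10:20] -= 2.0

clf = make_pipeline(
    PCA(n_components=10),                            # salient features
    BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
)
print(cross_val_score(clf, X, y, cv=5).mean())       # overall accuracy;
# per-movement accuracy would come from a confusion matrix instead.
```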