Title: Towards a ROS-based Modular Multi-Modality Haptic Feedback System for Robotic Minimally Invasive Surgery Training Assessments
Current commercially available robotic minimally invasive surgery (RMIS) platforms provide no haptic feedback of tool interactions with the surgical environment. As a consequence, novice robotic surgeons must rely exclusively on visual feedback to sense their physical interactions with the surgical environment. This technical limitation can make it challenging and time-consuming to train novice surgeons to proficiency in RMIS. Extensive prior research has demonstrated that incorporating haptic feedback is effective at improving surgical training task performance. However, few studies have investigated the utility of providing multiple modalities of haptic feedback simultaneously (multi-modality haptic feedback) in this context, and those studies have presented mixed results regarding its efficacy. Furthermore, the inability to generalize and compare these mixed results has limited our ability to understand why they vary significantly between studies. Therefore, we have developed a generalized, modular multi-modality haptic feedback and data acquisition framework leveraging the real-time data acquisition and streaming capabilities of the Robot Operating System (ROS). In our preliminary study using this system, participants complete a peg transfer task on a da Vinci robot while receiving haptic feedback of applied forces, contact accelerations, or both via custom wrist-worn haptic devices. Results highlight the capability of our system to run systematic comparisons between various single- and dual-modality haptic feedback approaches.
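
To make the data flow of such a ROS-based pipeline concrete, the minimal sketch below shows how a relay node could subscribe to streamed force and contact-acceleration data and republish scaled commands for wrist-worn haptic devices. The topic names, message types, and gains are illustrative assumptions, not the interface of the system described above.

```python
#!/usr/bin/env python
# Minimal sketch of a multi-modality haptic relay node.
# Topic names, message types, and gains are assumptions for illustration.
import rospy
from geometry_msgs.msg import WrenchStamped, Vector3Stamped

FORCE_GAIN = 0.5   # illustrative scaling from sensed force to device command
ACCEL_GAIN = 0.2   # illustrative scaling from contact acceleration to vibration

class HapticRelay(object):
    def __init__(self):
        # Commands for a hypothetical wrist-worn haptic device driver
        self.force_cmd = rospy.Publisher('haptics/force_cmd', Vector3Stamped, queue_size=10)
        self.vibro_cmd = rospy.Publisher('haptics/vibro_cmd', Vector3Stamped, queue_size=10)
        # Sensor streams from the surgical tool (assumed topic names)
        rospy.Subscriber('tool/wrench', WrenchStamped, self.on_wrench)
        rospy.Subscriber('tool/contact_accel', Vector3Stamped, self.on_accel)

    def on_wrench(self, msg):
        # Relay applied forces as a scaled force-feedback command
        cmd = Vector3Stamped()
        cmd.header = msg.header
        cmd.vector.x = FORCE_GAIN * msg.wrench.force.x
        cmd.vector.y = FORCE_GAIN * msg.wrench.force.y
        cmd.vector.z = FORCE_GAIN * msg.wrench.force.z
        self.force_cmd.publish(cmd)

    def on_accel(self, msg):
        # Relay contact accelerations as a scaled vibrotactile command
        cmd = Vector3Stamped()
        cmd.header = msg.header
        cmd.vector.x = ACCEL_GAIN * msg.vector.x
        cmd.vector.y = ACCEL_GAIN * msg.vector.y
        cmd.vector.z = ACCEL_GAIN * msg.vector.z
        self.vibro_cmd.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('haptic_relay')
    HapticRelay()
    rospy.spin()
```

Because each modality lives on its own topic in this sketch, a single- or dual-modality condition could be selected simply by launching or remapping the corresponding publishers, which is one way the modular comparisons described above could be configured.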
Award ID(s):
1852155
NSF-PAR ID:
10357399
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
International Symposium on Medical Robotics
Page Range / eLocation ID:
1 to 7
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Successful surgical operations are characterized by preplanning routines to be executed during actual surgical operations. To achieve this, surgeons rely on experience acquired from the use of cadavers, enabling technologies like virtual reality (VR), and clinical years of practice. However, cadavers lack dynamism and realism because they have no blood and can exhibit tissue degradation and shrinkage, while current VR systems do not provide amplified haptic feedback. This can impact surgical training, increasing the likelihood of medical errors. This work proposes a novel Mixed Reality Combination System (MRCS) that pairs Augmented Reality (AR) technology and an inertial measurement unit (IMU) sensor with 3D-printed, collagen-based specimens to enhance performance of tasks such as planning and execution. To achieve this, the MRCS charts out a path prior to user task execution, based on the visual, physical, and dynamic state of a target object, by utilizing surgeon-created virtual imagery that, when projected onto a 3D-printed biospecimen as AR, reacts visually to user input on its actual physical state. This allows the MRCS to react to the user in real time, displaying new multi-sensory virtual states of an object before the user acts on the actual physical state of that same object, enabling effective task planning. User actions tracked with an integrated 9-degree-of-freedom IMU demonstrate task execution, showing that a user with limited knowledge of specific anatomy can, under guidance, execute a preplanned task. In addition to surgical planning, this system can be applied generally in areas such as construction, maintenance, and education.
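
    As a rough illustration of how tracked motion could be scored against a preplanned path, the short sketch below compares executed samples to planned waypoints and reports a mean deviation. The waypoints, executed samples, and tolerance are made-up placeholder values; the MRCS itself derives its plan from surgeon-created virtual imagery and tracks the user with a 9-DOF IMU.

```python
# Illustrative check of executed motion against a preplanned path.
# The waypoints, executed samples, and tolerance are made-up placeholder values;
# the MRCS derives its plan from surgeon-created imagery and a 9-DOF IMU.
import numpy as np

def execution_error(planned, executed):
    """Mean distance from each executed sample to its nearest planned waypoint."""
    planned = np.asarray(planned, dtype=float)
    executed = np.asarray(executed, dtype=float)
    # Pairwise distances: (num_executed, num_planned)
    dists = np.linalg.norm(executed[:, None, :] - planned[None, :, :], axis=2)
    return dists.min(axis=1).mean()

if __name__ == '__main__':
    planned = [[0.0, 0.0, 0.0], [0.01, 0.0, 0.0], [0.02, 0.01, 0.0]]            # metres
    executed = [[0.001, 0.0, 0.0], [0.012, 0.002, 0.0], [0.021, 0.009, 0.001]]
    err = execution_error(planned, executed)
    print('within tolerance' if err < 0.005 else 'off plan', '%.4f m' % err)
```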

  2. Robotic-assisted minimally invasive surgery (MIS) has enabled procedures with increased precision and dexterity, but surgical robots are still open loop and require surgeons to work with a tele-operation console providing only limited visual feedback. In this setting, mechanical failures, software faults, or human errors might lead to adverse events resulting in patient complications or fatalities. We argue that impending adverse events could be detected and mitigated by applying context-specific safety constraints on the motions of the robot. We present a context-aware safety monitoring system which segments a surgical task into subtasks using kinematics data and monitors safety constraints specific to each subtask. To test our hypothesis about context specificity of safety constraints, we analyze recorded demonstrations of dry-lab surgical tasks collected from the JIGSAWS database as well as from experiments we conducted on a Raven II surgical robot. Analysis of the trajectory data shows that each subtask of a given surgical procedure has consistent safety constraints across multiple demonstrations by different subjects. Our preliminary results show that violations of these safety constraints lead to unsafe events, and there is often sufficient time between the constraint violation and the safety-critical event to allow for a corrective action. 
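
    A minimal sketch of the monitoring idea, assuming subtask labels are already available and constraints are expressed as simple per-subtask bounds, is shown below. The labels, limits, and sample format are illustrative, not the context-specific constraints identified in the study.

```python
# Minimal sketch of per-subtask safety-constraint checking on kinematics samples.
# Subtask labels, limits, and the sample format are illustrative assumptions,
# not the context-specific constraints identified in the study.
SUBTASK_CONSTRAINTS = {
    'approach': {'max_speed': 0.05, 'max_grip': 0.5},   # m/s, N
    'grasp':    {'max_speed': 0.01, 'max_grip': 2.0},
    'transfer': {'max_speed': 0.08, 'max_grip': 2.0},
}

def check_sample(subtask, speed, grip_force):
    """Return the list of constraints violated by one kinematic sample."""
    limits = SUBTASK_CONSTRAINTS[subtask]
    violations = []
    if speed > limits['max_speed']:
        violations.append('speed %.3f m/s exceeds %.3f m/s' % (speed, limits['max_speed']))
    if grip_force > limits['max_grip']:
        violations.append('grip %.2f N exceeds %.2f N' % (grip_force, limits['max_grip']))
    return violations

if __name__ == '__main__':
    # A sample that would be safe during 'transfer' but is flagged during 'grasp'
    print(check_sample('grasp', speed=0.03, grip_force=1.5))
```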
  3. This paper describes a framework allowing intraoperative photoacoustic (PA) imaging to be integrated into minimally invasive surgical systems. PA is an emerging imaging modality that combines the high penetration of ultrasound (US) imaging with high optical contrast. With PA imaging, a surgical robot can provide intraoperative neurovascular guidance to the operating physician, alerting them to the presence of vital substrate anatomy invisible to the naked eye and preventing complications such as hemorrhage and paralysis. Our proposed framework is designed to work with the da Vinci surgical system: real-time PA images produced by the framework are superimposed on the endoscopic video feed with an augmented reality overlay, thus enabling intuitive three-dimensional localization of critical anatomy. To evaluate the accuracy of the proposed framework, we first conducted experimental studies in a phantom with known geometry, which revealed a volumetric reconstruction error of 1.20 ± 0.71 mm. We also conducted an ex vivo study by embedding blood-filled tubes into chicken breast, demonstrating successful real-time PA-augmented vessel visualization on the endoscopic view. These results suggest that the proposed framework could provide anatomical and functional feedback to surgeons and has the potential to be incorporated into robot-assisted minimally invasive surgical procedures.
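
    One way to picture the augmented-reality overlay step is as a standard pinhole projection of reconstructed 3D PA points into the endoscopic image. The sketch below uses placeholder intrinsics, extrinsics, and points rather than the calibration reported in the paper.

```python
# Sketch of overlaying reconstructed 3D photoacoustic (PA) points onto the
# endoscopic image with a pinhole camera model.  The intrinsics, extrinsics,
# and example points are placeholders, not the calibration used in the paper.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed endoscope intrinsics (pixels)
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # assumed rotation: PA frame -> camera frame
t = np.array([0.0, 0.0, 0.10])         # assumed translation (metres)

def project(points_pa):
    """Project Nx3 PA points into pixel coordinates (u, v)."""
    points_pa = np.asarray(points_pa, dtype=float)
    cam = points_pa @ R.T + t          # transform points into the camera frame
    pix = cam @ K.T                    # apply intrinsics (homogeneous pixels)
    return pix[:, :2] / pix[:, 2:3]    # perspective divide

if __name__ == '__main__':
    # Two points of a reconstructed vessel, a few centimetres in front of the camera
    print(project([[0.0, 0.0, 0.05], [0.01, 0.0, 0.05]]))
```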
  4. Tactile sensing has been increasingly utilized in robot control of unknown objects to infer physical properties and optimize manipulation. However, there is limited understanding about the contribution of different sensory modalities during interactive perception in complex interaction, both in robots and in humans. This study investigated the effect of visual and haptic information on humans’ exploratory interactions with a ‘cup of coffee’, an object with nonlinear internal dynamics. Subjects were instructed to rhythmically transport a virtual cup with a rolling ball inside between two targets at a specified frequency, using a robotic interface. The cup and targets were displayed on a screen, and force feedback from the cup-and-ball dynamics was provided via the robotic manipulandum. Subjects were encouraged to explore and prepare the dynamics by “shaking” the cup-and-ball system to find the best initial conditions prior to the task. Two groups of subjects received full haptic feedback about the cup-and-ball movement during the task; however, for one group the ball movement was visually occluded. Visual information about the ball movement had two distinct effects on performance: it reduced the preparation time needed to understand the dynamics and, importantly, it led to simpler, more linear input-output interactions between hand and object. The results highlight how visual and haptic information regarding nonlinear internal dynamics have distinct roles in the interactive perception of complex objects.
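
    For intuition about the nonlinear internal dynamics involved, the cup-and-ball system can be approximated as a pendulum driven by the cup's horizontal acceleration. The sketch below simulates that simplified model under an assumed sinusoidal cup motion; the model form, parameters, and cup trajectory are illustrative assumptions, not the dynamics used in the experiment.

```python
# Illustrative simulation of a simplified cup-and-ball system modeled as a
# pendulum driven by the cup's horizontal acceleration.  The model form,
# parameters, and cup motion are assumptions, not the study's actual dynamics.
import math

GRAVITY = 9.81   # m/s^2
ROD_LEN = 0.05   # effective pendulum length of the ball in the cup (m)
DT = 0.001       # integration step (s)

def cup_accel(t):
    """Assumed sinusoidal cup motion, as in rhythmic transport between targets."""
    freq, amplitude = 1.0, 0.05            # Hz, m
    w = 2.0 * math.pi * freq
    return -amplitude * w * w * math.sin(w * t)

def simulate(duration=5.0):
    """Integrate the ball angle (rad) relative to the cup's vertical axis."""
    phi, phi_dot, t = 0.0, 0.0, 0.0
    while t < duration:
        # Pendulum on a horizontally accelerating support
        phi_ddot = -(GRAVITY / ROD_LEN) * math.sin(phi) \
                   - (cup_accel(t) / ROD_LEN) * math.cos(phi)
        phi_dot += phi_ddot * DT           # semi-implicit Euler step
        phi += phi_dot * DT
        t += DT
    return phi, phi_dot

if __name__ == '__main__':
    print(simulate())
```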