
Title: Markerless Suture Needle Tracking From A Robotic Endoscope Based On Deep Learning
Advancements in robot-assisted surgery have been growing rapidly over the past two decades. More recently, the automation of robotic surgical tasks has become a focus of research. In this area, detecting and tracking a surgical tool is crucial for an autonomous system to plan and perform a procedure. For example, knowing the position and posture of a needle is a prerequisite for an automatic suturing system to grasp it and perform suturing tasks. In this paper, we propose a novel method, based on deep learning and point-to-point registration, to track the 6-degree-of-freedom (DOF) pose of a metal suture needle from a robotic endoscope (an Endoscopic Camera Manipulator from the da Vinci Robotic Surgical Systems), without the help of any marker. The proposed approach was implemented and evaluated in a standard simulated surgical environment provided by the 2021–2022 AccelNet Surgical Robotics Challenge, demonstrating its potential to be translated into a real-world scenario. A customized dataset containing 836 images collected from the simulated scene, with ground-truth pose and key-point information, was constructed to train the neural network model. The best pipeline achieved an average position error of 1.76 mm and an average orientation error of 8.55 degrees, and it can run at up to 10 Hz on a PC.
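The point-to-point registration step can be illustrated as a rigid alignment between the needle's model keypoints and the keypoints detected by the network. A minimal sketch, assuming the standard SVD-based (Kabsch) closed-form solution rather than the paper's exact pipeline:

```python
import numpy as np

def estimate_pose(model_pts, detected_pts):
    """Recover a rigid 6-DOF pose (R, t) aligning needle model keypoints
    (N x 3) to detected keypoints (N x 3) via the Kabsch/SVD algorithm."""
    # Center both point sets
    mu_m = model_pts.mean(axis=0)
    mu_d = detected_pts.mean(axis=0)
    A = model_pts - mu_m
    B = detected_pts - mu_d
    # Cross-covariance matrix and its SVD
    H = A.T @ B
    U, _, Vt = np.linalg.svd(H)
    # Resolve the reflection ambiguity so that det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_m
    return R, t
```

Given three or more non-collinear keypoints, this returns the rotation and translation that map the model frame onto the camera frame.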
Journal Name: 2023 International Symposium on Medical Robotics (ISMR)
Page Range / eLocation ID: 1 to 7
Sponsoring Org: National Science Foundation
More Like this
  1. An important problem in designing human-robot systems is the integration of human intent and performance into the robotic control loop, especially in complex tasks. Bimanual coordination is a complex human behavior that is critical in many fine motor tasks, including robot-assisted surgery. To fully leverage the capabilities of the robot as an intelligent and assistive agent, online recognition of bimanual coordination could be important. Robotic assistance for a suturing task, for example, will be fundamentally different during phases when the suture is wrapped around the instrument (i.e., making a c-loop) than when the ends of the suture are pulled apart. In this study, we develop an online recognition method of bimanual coordination modes (i.e., the directions and symmetries of right- and left-hand movements) using geometric descriptors of hand motion. We (1) develop this framework based on ideal trajectories obtained during virtual 2D bimanual path-following tasks performed by human subjects operating Geomagic Touch haptic devices, (2) test the offline recognition accuracy of bimanual direction and symmetry from human subject movement trials, and (3) evaluate how the framework can be used to characterize 3D trajectories of the da Vinci Surgical System's surgeon-side manipulators during bimanual surgical training tasks. In the human subject trials, our geometric bimanual movement classification accuracy was 92.3% for movement direction (i.e., hands moving together, parallel, or away) and 86.0% for symmetry (e.g., mirror or point symmetry). We also show that this approach can be used for online classification of different bimanual coordination modes during needle transfer, making a c-loop, and suture pulling gestures on the da Vinci system, with results matching the expected modes.
Finally, we discuss how these online estimates are sensitive to task environment factors and surgeon expertise, and thus inspire future work that could leverage adaptive control strategies to enhance user skill during robot-assisted surgery. 
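The geometric classification of bimanual direction can be sketched from instantaneous hand positions and velocities. The thresholds and descriptors below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def classify_direction(v_left, v_right, p_left, p_right, eps=1e-9):
    """Classify bimanual movement direction ("together", "parallel", or
    "away") from hand velocities and positions. Hypothetical descriptor:
    velocity alignment plus the rate of change of inter-hand distance."""
    # Cosine of the angle between the two hand velocity vectors
    cos_v = v_left @ v_right / (np.linalg.norm(v_left) * np.linalg.norm(v_right) + eps)
    if cos_v > 0.7:
        # Hands moving in (nearly) the same direction
        return "parallel"
    # d/dt |p_right - p_left|: positive means the hands are separating
    sep = p_right - p_left
    sep_rate = sep @ (v_right - v_left) / (np.linalg.norm(sep) + eps)
    return "away" if sep_rate > 0 else "together"
```

In an online setting, velocities would come from finite differences of the manipulator poses, smoothed over a short window.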
  2. Abstract
     Background
     Analysing kinematic and video data can help identify potentially erroneous motions that lead to sub‐optimal surgeon performance and safety‐critical events in robot‐assisted surgery.
     Methods
     We develop a rubric for identifying task‐ and gesture‐specific executional and procedural errors and evaluate dry‐lab demonstrations of suturing and needle passing tasks from the JIGSAWS dataset. We characterise erroneous parts of demonstrations by labelling video data, and use distribution similarity analysis and trajectory averaging on kinematic data to identify parameters that distinguish erroneous gestures.
     Results
     Executional error frequency varies by task and gesture, and correlates with skill level. Some predominant error modes in each gesture are distinguishable by analysing error‐specific kinematic parameters. Procedural errors could lead to lower performance scores and increased demonstration times but also depend on surgical style.
     Conclusion
     This study provides insights into context‐dependent errors that can be used to design automated error detection mechanisms and improve training and skill assessment.

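The distribution similarity analysis step can be illustrated with a two-sample Kolmogorov-Smirnov test on a single kinematic parameter. A minimal sketch, assuming KS as the similarity measure (the abstract does not specify the exact statistic or parameters):

```python
import numpy as np
from scipy.stats import ks_2samp

def compare_kinematics(normal, erroneous, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test on one kinematic parameter
    (e.g., gripper angle or tool-tip speed) to check whether erroneous
    gestures appear to come from a different distribution than normal ones.
    Returns (statistic, p-value, distinguishable-at-alpha flag)."""
    stat, p = ks_2samp(normal, erroneous)
    return stat, p, p < alpha
```

Parameters whose erroneous and normal samples are distinguishable under such a test are candidates for error-specific detection features.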
  3. Over the past decade, Robot-Assisted Surgeries (RAS) have become more prevalent in facilitating successful operations. Of the various types of RAS, the domain of collaborative surgery has gained traction in medical research. Prominent examples include providing haptic feedback to sense tissue consistency, and automating sub-tasks during surgery such as cutting or needle hand-off (pulling and reorienting the needle after insertion during suturing). By fragmenting suturing into automated and manual tasks, the surgeon could essentially control the process with one hand and also circumvent workspace restrictions imposed by the control interface present at the surgeon's side during the operation. This paper presents an exploration of a discrete reinforcement learning-based approach to automate the needle hand-off task. Users were asked to perform a simple running suture using the da Vinci Research Kit. The user trajectory was learnt by generating a sparse reward function and deriving an optimal policy using Q-learning. Trajectories obtained from three learnt policies were compared to the user-defined trajectory. The results showed a root-mean-square error of [0.0044 mm, 0.0027 mm, 0.0020 mm] in ℝ³. Additional trajectories from varying initial positions were produced from a single policy to simulate repeated passes of the hand-off task.
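Tabular Q-learning with a sparse reward, as described above, can be sketched on a toy discretized state space. The states, actions, and reward below are illustrative stand-ins, not the paper's actual trajectory discretization:

```python
import numpy as np

# Toy analogue: learn to move a gripper along a discretized 1-D axis to a
# goal state, receiving a sparse reward only upon reaching the goal.
N_STATES, GOAL = 10, 9
ACTIONS = [-1, +1]                 # step left / step right
alpha, gamma, eps = 0.2, 0.9, 0.2  # learning rate, discount, exploration
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))

for episode in range(2000):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0            # sparse reward
        # Standard Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# Greedy policy after training: 1 means "step right" (toward the goal)
policy = Q.argmax(axis=1)
```

After training, rolling out the greedy policy from any initial state yields a trajectory to the goal, analogous to producing hand-off trajectories from varying initial positions with a single policy.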
  4. Training for robotic surgery can be challenging due to the complexity of the technology, as well as the high demand for robotic systems that must be primarily used for clinical care. While robotic surgical skills are traditionally trained using the robotic hardware coupled with physical simulated tissue models and test-beds, there has been increasing interest in using virtual reality (VR) simulators. Use of VR comes with some advantages, such as the ability to record and track metrics associated with learning. However, evidence of skill transfer from virtual environments to physical robotic tasks has yet to be fully demonstrated. In this work, we evaluate the effect of VR pre-training on performance during a standardized robotic dry-lab training curriculum, where trainees perform a set of tasks and are evaluated with a score based on completion time and errors made during the task. Results show that VR pre-training is weakly significant ([Formula: see text]) in reducing the number of repetitions required to achieve proficiency on the robotic task; however, it does not significantly improve performance on any robotic task. This suggests that important skills are learned during physical training with the surgical robotic system that cannot yet be replaced by VR training.
  5. In minimally invasive percutaneous procedures such as biopsy, brachytherapy, and tissue ablation, the inner soft tissue is accessed through surgical needle-puncture of the skin. This process reduces tissue damage and risk of infection and improves patient recovery time. However, its effectiveness depends on the needle's ability to travel on a curved path, avoid obstacles, and maintain high targeting accuracy. Conventional needles are passive and have limited steerability and trajectory correction capability. This has motivated researchers to develop actuation mechanisms to create active needles. In this study, an innovative active steerable needle with a single shape memory alloy (SMA) wire actuator is designed, fabricated, and tested for maneuverability. A closed-loop Proportional Integral Derivative (PID) controller with position feedback is developed to control needle tip deflection in air and in tissue-mimicking gels. The needle tip is deflected up to 5.75 mm in air. In tissue-mimicking gel, it is deflected up to 15 mm along a predefined trajectory during a 100 mm insertion depth. Our results show that needle tip deflection control has an average root mean square error (RMSE) of 0.72 mm in air and 1.26 mm in the tissue-mimicking gel. The trajectory tracking performance of the designed SMA-actuated needle and its control system shows the effectiveness of the active needle in percutaneous procedures. Future work includes testing the needle's performance in biological tissues.
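The closed-loop deflection control described above can be sketched as a standard discrete PID update. The gains and the first-order plant used in the test are illustrative placeholders, not the identified SMA-needle dynamics:

```python
class PID:
    """Discrete PID controller for needle-tip deflection tracking.
    Gains are illustrative; real SMA actuation would require tuning
    against its hysteretic, thermally driven dynamics."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        """One control step: returns the actuation command u(k)."""
        err = setpoint - measured
        self.integral += err * self.dt              # integral term
        deriv = (err - self.prev_err) / self.dt     # derivative term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Driving a toy first-order plant (x' = -x + u) toward a 5.75 mm deflection setpoint with this controller settles with negligible steady-state error thanks to the integral term; the real system would replace the plant with measured tip-position feedback.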