Title: Development and Evaluation of a Markerless 6 DOF Pose Tracking Method for a Suture Needle from a Robotic Endoscope
Remarkable progress has been made in robot-assisted surgery in recent years, particularly in surgical task automation, though many challenges and opportunities remain. Among these topics, the detection and tracking of surgical tools play a pivotal role in enabling autonomous systems to plan and execute procedures effectively. For instance, accurate estimation of a needle's position and posture is essential for a surgical system to grasp the needle and perform suturing tasks autonomously. In this paper, we developed image-based methods for markerless 6 degrees of freedom (DOF) suture needle pose estimation, using a keypoint detection technique based on Deep Learning together with Point-to-point Registration; we also leveraged multiple viewpoints from a robotic endoscope to enhance accuracy. The data collection and annotation process was automated in a simulated environment, enabling us to create a training dataset of 3446 needle samples evenly distributed across a suturing phantom space and to report unbiased performance results. We also investigated the impact of training set size on keypoint detection accuracy. Our pipeline, which takes a single RGB image, achieved a median position error of 1.4 mm and a median orientation error of 2.9 degrees, while our multi-viewpoint method further reduced the random errors.
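The registration step is not spelled out in this abstract; a minimal sketch of one standard formulation, assuming the detected keypoints have already been lifted to 3D in the camera frame (for example, by triangulating across the endoscope viewpoints), is a Kabsch-style point-to-point rigid fit. The function and variable names below are illustrative, not taken from the paper.

    import numpy as np

    def register_points(model_pts, observed_pts):
        # Kabsch-style rigid registration: find R, t minimizing
        # || R @ m_i + t - o_i ||^2 over corresponding keypoints.
        mu_m = model_pts.mean(axis=0)
        mu_o = observed_pts.mean(axis=0)
        H = (model_pts - mu_m).T @ (observed_pts - mu_o)   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_o - R @ mu_m
        return R, t                                        # needle-to-camera pose

With noisy detections this fit is often wrapped in an outlier-rejection loop; averaging estimates across viewpoints, as the abstract describes, plays a similar error-reducing role.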
Award ID(s):
1927275
PAR ID:
10528557
Author(s) / Creator(s):
; ;
Publisher / Repository:
World Scientific
Date Published:
Journal Name:
Journal of Medical Robotics Research
Volume:
08
Issue:
03n04
ISSN:
2424-905X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Advancements in robot-assisted surgery have been growing rapidly over the past two decades. More recently, the automation of robotic surgical tasks has become a focus of research. In this area, the detection and tracking of a surgical tool are crucial for an autonomous system to plan and perform a procedure. For example, knowing the position and posture of a needle is a prerequisite for an automatic suturing system to grasp it and perform suturing tasks. In this paper, we proposed a novel method, based on Deep Learning and Point-to-point Registration, to track the 6 degrees of freedom (DOF) pose of a metal suture needle from a robotic endoscope (an Endoscopic Camera Manipulator from the da Vinci Robotic Surgical Systems), without the help of any marker. The proposed approach was implemented and evaluated in a standard simulated surgical environment provided by the 2021–2022 AccelNet Surgical Robotics Challenge, thus demonstrating the potential to be translated into a real-world scenario. A customized dataset containing 836 images collected from the simulated scene, with ground-truth pose and keypoint information, was constructed to train the neural network model. The best pipeline achieved an average position error of 1.76 mm and an average orientation error of 8.55 degrees, and it can run at up to 10 Hz on a PC.
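For the single-image case described here, one common way to turn learned 2D keypoints into a 6 DOF pose is a Perspective-n-Point solve against the known needle geometry. The sketch below shows that generic recipe, not necessarily the paper's exact registration method; every coordinate and intrinsic value is a placeholder.

    import cv2
    import numpy as np

    # Hypothetical keypoints along the needle arc, in the needle frame (metres).
    model_kpts = np.array([[0.000, 0.000, 0.0],
                           [0.004, 0.003, 0.0],
                           [0.008, 0.004, 0.0],
                           [0.012, 0.003, 0.0]])
    # Corresponding 2D detections from the keypoint network (pixels, illustrative).
    image_kpts = np.array([[312.0, 240.5],
                           [330.2, 228.1],
                           [349.7, 221.4],
                           [368.3, 222.0]])
    K = np.array([[800.0, 0.0, 320.0],      # placeholder endoscope intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    ok, rvec, tvec = cv2.solvePnP(model_kpts, image_kpts, K, None)
    R, _ = cv2.Rodrigues(rvec)              # needle-to-camera rotation; tvec gives translation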
  2. In this work, we develop an open-source surgical simulation environment that includes a realistic model obtained by MRI-scanning a physical phantom, for the purpose of training and evaluating a Learning from Demonstration (LfD) algorithm for autonomous suturing. The LfD algorithm utilizes Dynamic Movement Primitives (DMP) and Locally Weighted Regression (LWR), but focuses on the needle trajectory, rather than the instruments, to obtain better generality with respect to needle grasps. We conduct a user study to collect multiple suturing demonstrations and perform a comprehensive analysis of the ability of the LfD algorithm to generalize from a demonstration at one location in one phantom to different locations in the same phantom and to a different phantom. Our results indicate good generalization, on the order of 91.5%, when learning from more experienced subjects, indicating the need to integrate skill assessment in the future. 
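The abstract names Dynamic Movement Primitives as the trajectory representation. A minimal one-dimensional DMP rollout, with illustrative gains and basis parameters rather than the paper's settings, looks like this:

    import numpy as np

    def dmp_rollout(x0, g, weights, centers, widths, tau=1.0, dt=0.002,
                    alpha_s=4.0, K=150.0, D=2.0 * np.sqrt(150.0)):
        # Discrete DMP: spring-damper toward goal g, shaped by a learned
        # forcing term driven by the canonical phase variable s.
        x, v, s = x0, 0.0, 1.0
        path = [x]
        for _ in range(int(tau / dt)):
            psi = np.exp(-widths * (s - centers) ** 2)          # Gaussian bases
            f = (psi @ weights) / (psi.sum() + 1e-10) * s * (g - x0)
            a = (K * (g - x) - D * v + f) / tau                  # transformation system
            v += a * dt
            x += v / tau * dt
            s += (-alpha_s * s / tau) * dt                       # canonical system
            path.append(x)
        return np.array(path)

    n = 20
    centers = np.exp(-4.0 * np.linspace(0.0, 1.0, n))   # bases spaced along s
    widths = np.full(n, 50.0)
    path = dmp_rollout(x0=0.0, g=0.1, weights=np.zeros(n),
                       centers=centers, widths=widths)  # zero weights: plain spring

In an LfD setting like the one described, one such system would be fitted per dimension of the needle trajectory, with the forcing-term weights estimated from demonstrations, e.g., by Locally Weighted Regression.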
  3. Background: Analysing kinematic and video data can help identify potentially erroneous motions that lead to sub-optimal surgeon performance and safety-critical events in robot-assisted surgery. Methods: We develop a rubric for identifying task- and gesture-specific executional and procedural errors and evaluate dry-lab demonstrations of suturing and needle passing tasks from the JIGSAWS dataset. We characterise erroneous parts of demonstrations by labelling video data, and use distribution similarity analysis and trajectory averaging on kinematic data to identify parameters that distinguish erroneous gestures. Results: Executional error frequency varies by task and gesture, and correlates with skill level. Some predominant error modes in each gesture are distinguishable by analysing error-specific kinematic parameters. Procedural errors could lead to lower performance scores and increased demonstration times but also depend on surgical style. Conclusions: This study provides insights into context-dependent errors that can be used to design automated error detection mechanisms and improve training and skill assessment.
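As one illustration of the distribution-similarity idea mentioned above, a two-sample Kolmogorov-Smirnov test can compare a kinematic parameter between error-labelled and normal gesture segments; the data below are synthetic placeholders, not JIGSAWS values.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    # Placeholder samples standing in for one kinematic parameter (e.g.,
    # gripper speed in m/s) from error-labelled vs. normal gesture segments.
    speed_err = rng.normal(0.05, 0.02, 200)
    speed_ok = rng.normal(0.03, 0.01, 200)
    stat, p = ks_2samp(speed_err, speed_ok)   # two-sample Kolmogorov-Smirnov test
    print(f"KS statistic = {stat:.3f}, p = {p:.3g}")  # small p: distributions differ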
  4. We propose an improved keypoint approach for 6-DoF grasp pose synthesis from RGB-D input. Keypoint-based grasp detection from image input demonstrated promising results in a previous study, where the visual information provided by color imagery compensates for noisy or imprecise depth measurements. However, it relies heavily on accurate keypoint prediction in image space. We devise a new grasp generation network that reduces the dependency on precise keypoint estimation. Given an RGB-D input, the network estimates both the grasp pose and the camera-grasp length scale. Re-design of the keypoint output space mitigates the impact of keypoint prediction noise on Perspective-n-Point (PnP) algorithm solutions. Experiments show that the proposed method outperforms the baseline by a large margin, validating its design. Though trained only on simple synthetic objects, our method demonstrates sim-to-real capacity through competitive results in real-world robot experiments. 
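The paper's own remedy is a re-designed keypoint output space; a generic, off-the-shelf way to blunt keypoint noise in the PnP step, shown here purely for contrast, is a RANSAC wrapper around the solver. All correspondences below are synthetic.

    import cv2
    import numpy as np

    rng = np.random.default_rng(1)
    obj = rng.uniform(-0.05, 0.05, (12, 3))   # hypothetical 3D object keypoints (m)
    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])           # placeholder camera intrinsics
    rvec_gt = np.array([[0.1], [0.2], [0.3]])
    tvec_gt = np.array([[0.0], [0.0], [0.5]])
    img, _ = cv2.projectPoints(obj, rvec_gt, tvec_gt, K, None)
    img = img.reshape(-1, 2) + rng.normal(0, 1.0, (12, 2))   # pixel noise
    img[0] += 40.0                                           # one gross outlier
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None,
                                                 reprojectionError=4.0)
    # RANSAC rejects the outlier correspondence before the final pose refit.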
  5. An important problem in designing human-robot systems is the integration of human intent and performance in the robotic control loop, especially in complex tasks. Bimanual coordination is a complex human behavior that is critical in many fine motor tasks, including robot-assisted surgery. To fully leverage the capabilities of the robot as an intelligent and assistive agent, online recognition of bimanual coordination could be important. Robotic assistance for a suturing task, for example, will be fundamentally different during phases when the suture is wrapped around the instrument (i.e., making a C-loop) than when the ends of the suture are pulled apart. In this study, we develop an online recognition method for bimanual coordination modes (i.e., the directions and symmetries of right and left hand movements) using geometric descriptors of hand motion. We (1) develop this framework based on ideal trajectories obtained during virtual 2D bimanual path following tasks performed by human subjects operating Geomagic Touch haptic devices, (2) test the offline recognition accuracy of bimanual direction and symmetry from human subject movement trials, and (3) evaluate how the framework can be used to characterize 3D trajectories of the da Vinci Surgical System's surgeon-side manipulators during bimanual surgical training tasks. In the human subject trials, our geometric bimanual movement classification accuracy was 92.3% for movement direction (i.e., hands moving together, parallel, or away) and 86.0% for symmetry (e.g., mirror or point symmetry). We also show that this approach can be used for online classification of different bimanual coordination modes during needle transfer, C-loop making, and suture pulling gestures on the da Vinci system, with results matching the expected modes. Finally, we discuss how these online estimates are sensitive to task environment factors and surgeon expertise, and thus inspire future work that could leverage adaptive control strategies to enhance user skill during robot-assisted surgery.
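As a sketch of what a geometric descriptor for the direction classes (together, parallel, away) might look like, the function below classifies one time step from hand positions and velocities; the threshold and the exact rule are illustrative assumptions, not the paper's definitions.

    import numpy as np

    def bimanual_direction(p_left, v_left, p_right, v_right, parallel_thresh=0.9):
        # p_*: hand positions; v_*: hand velocities (3-vectors).
        v_l = v_left / (np.linalg.norm(v_left) + 1e-9)
        v_r = v_right / (np.linalg.norm(v_right) + 1e-9)
        if v_l @ v_r > parallel_thresh:                  # same heading
            return "parallel"
        d_lr = p_right - p_left
        d_lr = d_lr / (np.linalg.norm(d_lr) + 1e-9)      # left-to-right unit vector
        separating = (v_right - v_left) @ d_lr           # rate of change of separation
        return "away" if separating > 0 else "together"

    mode = bimanual_direction(np.zeros(3), np.array([-0.01, 0.0, 0.0]),
                              np.array([0.1, 0.0, 0.0]), np.array([0.01, 0.0, 0.0]))
    # hands pulling apart along the x-axis -> "away"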