

Search for: All records

Creators/Authors contains: "Jiang, Yiwei"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Robot-assisted surgery has advanced rapidly over the past two decades, and more recently the automation of robotic surgical tasks has become a focus of research. In this area, detecting and tracking a surgical tool is crucial for an autonomous system to plan and perform a procedure. For example, knowing the position and posture of a needle is a prerequisite for an automatic suturing system to grasp it and perform suturing tasks. In this paper, we propose a novel method, based on deep learning and point-to-point registration, to track the 6-degree-of-freedom (DOF) pose of a metal suture needle from a robotic endoscope (an Endoscopic Camera Manipulator from the da Vinci Robotic Surgical Systems), without the help of any marker. The proposed approach was implemented and evaluated in a standard simulated surgical environment provided by the 2021–2022 AccelNet Surgical Robotics Challenge, demonstrating its potential to be translated into a real-world scenario. A customized dataset of 836 images collected from the simulated scene, with ground-truth pose and key-point annotations, was constructed to train the neural network model. The best pipeline achieved an average position error of 1.76 mm and an average orientation error of 8.55 degrees, and it can run at up to 10 Hz on a PC. (A minimal sketch of the registration step follows this record.)
    Free, publicly-accessible full text available April 19, 2024
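    The abstract above pairs a learned key-point detector with point-to-point registration. As a minimal, illustrative sketch (not the authors' implementation), the registration step can be written as a Kabsch/SVD alignment between known needle key points in a model frame and their triangulated positions in the camera frame; all function and variable names below are hypothetical.

    import numpy as np

    def kabsch_pose(model_pts, observed_pts):
        # Estimate the rigid transform (R, t) that maps model_pts onto observed_pts.
        # model_pts, observed_pts: (N, 3) corresponding 3D key points, e.g. needle
        # key points in the model/CAD frame and their triangulated positions in the
        # camera frame (obtained from the 2D key points detected by the network).
        mu_m = model_pts.mean(axis=0)
        mu_o = observed_pts.mean(axis=0)
        # Cross-covariance of the centered point sets.
        H = (model_pts - mu_m).T @ (observed_pts - mu_o)
        U, _, Vt = np.linalg.svd(H)
        # Guard against reflections so R stays a proper rotation.
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_o - R @ mu_m
        return R, t  # 6-DOF pose of the needle in the camera frame

    In such a pipeline the detector would supply the 2D key points, stereo triangulation would lift them to 3D, and (R, t) would be the tracked 6-DOF pose; the abstract does not specify how these steps are combined in the authors' system.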
  2. This paper describes a framework allowing intraoperative photoacoustic (PA) imaging integrated into minimally invasive surgical systems. PA is an emerging imaging modality that combines the high penetration of ultrasound (US) imaging with high optical contrast. With PA imaging, a surgical robot can provide intraoperative neurovascular guidance to the operating physician, alerting them of the presence of vital substrate anatomy invisible to the naked eye, preventing complications such as hemorrhage and paralysis. Our proposed framework is designed to work with the da Vinci surgical system: real-time PA images produced by the framework are superimposed on the endoscopic video feed with an augmented reality overlay, thus enabling intuitive three-dimensional localization of critical anatomy. To evaluate the accuracy of the proposed framework, we first conducted experimental studies in a phantom with known geometry, which revealed a volumetric reconstruction error of 1.20 ± 0.71 mm. We also conducted an ex vivo study by embedding blood-filled tubes into chicken breast, demonstrating the successful real-time PA-augmented vessel visualization onto the endoscopic view. These results suggest that the proposed framework could provide anatomical and functional feedback to surgeons and it has the potential to be incorporated into robot-assisted minimally invasive surgical procedures. (A minimal sketch of the image-overlay projection follows this record.)

     
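    The framework described above overlays reconstructed PA data on the endoscopic video. As a minimal, illustrative sketch (assuming a calibrated PA-to-camera transform and known endoscope intrinsics, neither of which is detailed in the abstract), the overlay reduces to a pinhole projection of the reconstructed 3D points into the image; all names below are hypothetical.

    import numpy as np

    def project_to_endoscope(points_pa, T_cam_pa, K):
        # Project 3D points from the photoacoustic (PA) reconstruction frame onto
        # the endoscopic image plane for an augmented-reality overlay.
        #   points_pa : (N, 3) reconstructed PA points (e.g. vessel surface points)
        #   T_cam_pa  : (4, 4) homogeneous transform, PA frame -> camera frame
        #   K         : (3, 3) endoscope camera intrinsic matrix
        # Returns (N, 2) pixel coordinates.
        pts_h = np.hstack([points_pa, np.ones((points_pa.shape[0], 1))])
        pts_cam = (T_cam_pa @ pts_h.T).T[:, :3]   # express points in the camera frame
        uv_h = (K @ pts_cam.T).T                  # pinhole projection, no lens distortion
        return uv_h[:, :2] / uv_h[:, 2:3]

    The resulting pixel coordinates can then be drawn on each video frame (and repeated for the second camera of a stereo endoscope) to produce the augmented-reality overlay described in the abstract.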
  3. This paper describes a framework allowing intraoperative photoacoustic (PA) imaging integrated into minimally invasive surgical systems. PA is an emerging imaging modality that combines the high penetration of ultrasound (US) imaging with high optical contrast. With PA imaging, a surgical robot can provide intraoperative neurovascular guidance to the operating physician, alerting them of the presence of vital substrate anatomy invisible to the naked eye, preventing complications such as hemorrhage and paralysis. Our proposed framework is designed to work with the da Vinci surgical system: real-time PA images produced by the framework are superimposed on the endoscopic video feed with an augmented reality overlay, thus enabling intuitive three-dimensional localization of critical anatomy. To evaluate the accuracy of the proposed framework, we first conducted experimental studies in a phantom with known geometry, which revealed a volumetric reconstruction error of 1.20 ± 0.71 mm. We also conducted an ex vivo study by embedding blood-filled tubes into chicken breast, demonstrating the successful real-time PA-augmented vessel visualization onto the endoscopic view. These results suggest that the proposed framework could provide anatomical and functional feedback to surgeons and it has the potential to be incorporated into robot-assisted minimally invasive surgical procedures. 
    Free, publicly-accessible full text available July 1, 2024
  4. null (Ed.)