Steerable needles are capable of accurately targeting difficult-to-reach clinical sites in the body. By bending around sensitive anatomical structures, steerable needles have the potential to reduce the invasiveness of many medical procedures. However, inserting these needles with curved trajectories increases the risk of tissue damage due to perpendicular forces exerted on the surrounding tissue by the needle’s shaft, potentially resulting in lateral shearing through tissue. Such forces can cause significant tissue damage, negatively affecting patient outcomes. In this work, we derive a tissue and needle force model based on a Cosserat string formulation, which describes the normal forces and frictional forces along the shaft as a function of the planned needle path, friction model and parameters, and tip piercing force. We propose this new force model and associated cost function as a safer and more clinically relevant metric than those currently used in motion planning for steerable needles. We fit and validate our model through physical needle robot experiments in a gel phantom. We use this force model to define a bottleneck cost function for motion planning and evaluate it against the commonly used path-length cost function in hundreds of randomly generated three-dimensional (3D) environments. Plans generated with our force-based cost show a 62% reduction in the peak modeled tissue force with only a 0.07% increase in length on average compared to using the path-length cost in planning. Additionally, we demonstrate planning with our force-based cost function in a lung tumor biopsy scenario from a segmented computed tomography (CT) scan. By directly minimizing the modeled needle-to-tissue force, our method may reduce patient risk and improve medical outcomes from steerable needle interventions.
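To make the bottleneck idea concrete, below is a minimal Python sketch that scores a discretized 3D needle path by its peak modeled normal force. It substitutes a simplified capstan-style friction approximation for the paper's full Cosserat string formulation, and the friction coefficient, tip piercing force, and function names are illustrative assumptions rather than fitted values from the experiments.

```python
import numpy as np

def bottleneck_force_cost(path_pts, mu=0.1, f_tip=1.0):
    """Bottleneck (peak) tissue-force cost for a discretized 3D needle path.

    Simplified capstan-style stand-in for the paper's Cosserat string model:
    axial tension grows from the tip piercing force through curvature-dependent
    friction, and the distributed normal load is tension times curvature.
    `mu` (friction coefficient) and `f_tip` (tip piercing force, arbitrary
    units) are illustrative placeholders. Requires at least 3 path points.
    """
    pts = np.asarray(path_pts, dtype=float)
    seg = np.diff(pts, axis=0)                      # segment vectors
    ds = np.linalg.norm(seg, axis=1)                # segment lengths
    tang = seg / ds[:, None]                        # unit tangents

    # Discrete curvature: turning angle between consecutive tangents / arc length.
    cos_theta = np.clip(np.einsum('ij,ij->i', tang[:-1], tang[1:]), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    kappa = theta / (0.5 * (ds[:-1] + ds[1:]))

    # Integrate tension from tip to base (capstan relation: dT = mu * T * dtheta).
    tension = f_tip
    peak_normal = 0.0
    for k, dtheta in zip(kappa[::-1], theta[::-1]):
        normal_per_len = tension * k                # distributed normal force
        peak_normal = max(peak_normal, normal_per_len)
        tension *= np.exp(mu * dtheta)
    return peak_normal
```

A sampling-based motion planner could then rank candidate needle paths by this bottleneck value instead of by path length.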
Deep Learning-Based Photoacoustic Visual Servoing: Using Outputs from Raw Sensor Data as Inputs to a Robot Controller
Tool tip visualization is an essential component of multiple robotic surgical and interventional procedures. In this paper, we introduce a real-time photoacoustic visual servoing system that processes information directly from raw acoustic sensor data, without requiring image formation or segmentation, in order to make robot path planning decisions that track and maintain visualization of tool tips. The performance of this novel deep learning-based visual servoing system is compared to that of a visual servoing system which relies on image formation followed by segmentation to make and execute robot path planning decisions. Experiments were conducted with a plastisol phantom, ex vivo tissue, and a needle as the interventional tool. Needle tip tracking with the deep learning-based approach outperformed the image-based segmentation approach by 67.7% and 55.3% in phantom and ex vivo tissue, respectively. In addition, the deep learning-based system operated within the frame-rate-limiting 10 Hz laser pulse repetition frequency, with mean execution times of 75.2 ms and 73.9 ms per acquisition frame with phantom and ex vivo tissue, respectively. These results highlight the benefits of our new approach of integrating deep learning with robotic systems for improved automation and visual servoing of tool tips.
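As an illustration of the segmentation-free control loop described above, the following Python sketch shows one servoing iteration that passes a raw channel-data frame to a learned tip regressor and re-centers the probe. Here `tip_regressor`, `robot`, and `move_lateral` are hypothetical placeholders, not interfaces from the paper.

```python
def servo_step(raw_channel_frame, tip_regressor, robot, gain=0.5):
    """One iteration of a raw-data photoacoustic visual-servoing loop.

    `tip_regressor` stands in for a trained deep network that maps a raw
    acoustic channel-data frame directly to a needle-tip (lateral, axial)
    estimate in mm; `robot` is a hypothetical interface with a move_lateral()
    method. This is an illustrative sketch of a segmentation-free control
    loop, not the authors' implementation.
    """
    lateral_mm, axial_mm = tip_regressor(raw_channel_frame)
    # Re-center the probe over the tip: drive the lateral offset toward zero.
    robot.move_lateral(-gain * lateral_mm)
    return lateral_mm, axial_mm
```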
- PAR ID: 10300666
- Journal Name: 2021 IEEE International Conference on Robotics and Automation (ICRA)
- Page Range / eLocation ID: 14261 to 14267
- Sponsoring Org: National Science Foundation
More Like this
-
Flexible array transducer for photoacoustic-guided interventions: phantom and ex vivo demonstrations
Photoacoustic imaging has demonstrated recent promise for surgical guidance, enabling visualization of tool tips during surgical and non-surgical interventions. To receive photoacoustic signals, most conventional transducers are rigid, while a flexible array is able to deform and provide complete contact on surfaces with different geometries. In this work, we present photoacoustic images acquired with a flexible array transducer in multiple concave shapes in phantom and ex vivo bovine liver experiments targeted toward interventional photoacoustic applications. We validate our image reconstruction equations for known sensor geometries with simulated data, and we provide empirical elevation field-of-view, target position, and image quality measurements. The elevation field-of-view was 6.08 mm at a depth of 4 cm and greater than 13 mm at a depth of 5 cm. The target depth agreement with ground truth ranged 98.35–99.69%. The mean lateral and axial target sizes when imaging 600 μm-core-diameter optical fibers inserted within the phantoms ranged 0.98–2.14 mm and 1.61–2.24 mm, respectively. The mean ± one standard deviation of lateral and axial target sizes when surrounded by liver tissue were 1.80±0.48 mm and 2.17±0.24 mm, respectively. Contrast, signal-to-noise, and generalized contrast-to-noise ratios ranged 6.92–24.42 dB, 46.50–67.51 dB, and 0.76–1, respectively, within the elevational field-of-view. Results establish the feasibility of implementing photoacoustic-guided surgery with a flexible array transducer.
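For context on reconstruction with a known, non-rigid sensor geometry, here is a generic delay-and-sum sketch in Python that accepts arbitrary element positions (e.g., a concave flexible array). It is a standard one-way photoacoustic beamformer under simple assumptions (uniform sound speed, no apodization), not the specific reconstruction equations validated in the paper.

```python
import numpy as np

def das_reconstruct(channel_data, elem_xz, grid_x, grid_z, fs, c=1540.0):
    """Delay-and-sum photoacoustic reconstruction for an arbitrary array shape.

    channel_data: (n_elements, n_samples) RF data with t=0 at the laser pulse.
    elem_xz:      (n_elements, 2) element (x, z) positions in meters.
    grid_x/z:     1-D pixel coordinates in meters; fs: sampling rate in Hz.
    """
    xx, zz = np.meshgrid(grid_x, grid_z, indexing='ij')
    image = np.zeros_like(xx)
    n_samples = channel_data.shape[1]
    for e, (ex, ez) in enumerate(elem_xz):
        # One-way time of flight from each pixel to this element.
        dist = np.sqrt((xx - ex) ** 2 + (zz - ez) ** 2)
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < n_samples
        image[valid] += channel_data[e, idx[valid]]
    return image
```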
-
Photoacoustic imaging, the combination of optics and acoustics to visualize differences in optical absorption, has recently demonstrated strong viability as a promising method to provide critical guidance of multiple surgeries and procedures. Benefits include its potential to assist with tumor resection, identify hemorrhaged and ablated tissue, visualize metal implants (e.g., needle tips, tool tips, brachytherapy seeds), track catheter tips, and avoid accidental injury to critical subsurface anatomy (e.g., major vessels and nerves hidden by tissue during surgery). These benefits are significant because they reduce surgical error, associated surgery-related complications (e.g., cancer recurrence, paralysis, excessive bleeding), and accidental patient death in the operating room. This invited review covers multiple aspects of the use of photoacoustic imaging to guide both surgical and related non-surgical interventions. Applicable organ systems span structures within the head to contents of the toes, with an eye toward surgical and interventional translation for the benefit of patients and for use in operating rooms and interventional suites worldwide. We additionally include a critical discussion of complete systems and tools needed to maximize the success of surgical and interventional applications of photoacoustic-based technology, spanning light delivery, acoustic detection, and robotic methods. Multiple enabling hardware and software integration components are also discussed, concluding with a summary and future outlook based on the current state of technological developments, recent achievements, and possible new directions.
-
Prostate cancer treatment planning is largely dependent upon examination of core-needle biopsies. The microscopic architecture of the prostate glands forms the basis for prognostic grading by pathologists. Interpretation of these convoluted three-dimensional (3D) glandular structures via visual inspection of a limited number of two-dimensional (2D) histology sections is often unreliable, which contributes to the under- and overtreatment of patients. To improve risk assessment and treatment decisions, we have developed a workflow for nondestructive 3D pathology and computational analysis of whole prostate biopsies labeled with a rapid and inexpensive fluorescent analogue of standard hematoxylin and eosin (H&E) staining. This analysis is based on interpretable glandular features and is facilitated by the development of image translation–assisted segmentation in 3D (ITAS3D). ITAS3D is a generalizable deep learning–based strategy that enables tissue microstructures to be volumetrically segmented in an annotation-free and objective (biomarker-based) manner without requiring immunolabeling. As a preliminary demonstration of the translational value of a computational 3D versus a computational 2D pathology approach, we imaged 300 ex vivo biopsies extracted from 50 archived radical prostatectomy specimens, of which 118 biopsies contained cancer. The 3D glandular features in cancer biopsies were superior to corresponding 2D features for risk stratification of patients with low- to intermediate-risk prostate cancer based on their clinical biochemical recurrence outcomes. The results of this study support the use of computational 3D pathology for guiding the clinical management of prostate cancer. Significance: An end-to-end pipeline for deep learning–assisted computational 3D histology analysis of whole prostate biopsies shows that nondestructive 3D pathology has the potential to enable superior prognostic stratification of patients with prostate cancer.
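As a toy illustration of interpretable 3D features computed from a segmented biopsy volume, the sketch below derives gland volume and surface-to-volume ratio from a binary mask using scikit-image. The feature choices, voxel size, and names are simplified assumptions and not the ITAS3D feature set.

```python
import numpy as np
from skimage import measure

def gland_shape_features(gland_mask, voxel_mm=(0.01, 0.01, 0.01)):
    """Toy 3D morphology features from a binary gland-segmentation volume.

    Illustrates the kind of interpretable volumetric features a 3D pathology
    pipeline can compute once glands are segmented; voxel_mm is a placeholder
    voxel pitch, not the study's imaging resolution.
    """
    voxel_vol = float(np.prod(voxel_mm))
    volume = gland_mask.sum() * voxel_vol                          # mm^3
    # Triangulate the gland boundary to estimate its surface area.
    verts, faces, _, _ = measure.marching_cubes(
        gland_mask.astype(float), level=0.5, spacing=voxel_mm)
    surface = measure.mesh_surface_area(verts, faces)              # mm^2
    return {"gland_volume_mm3": volume,
            "surface_to_volume": surface / max(volume, 1e-9)}
```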
-
A fundamental challenge in retinal surgery is safely navigating a surgical tool to a desired goal position on the retinal surface while avoiding damage to surrounding tissues, a procedure that typically requires tens-of-microns accuracy. In practice, the surgeon relies on depth-estimation skills to localize the tool-tip with respect to the retina and perform the tool-navigation task, which can be prone to human error. To alleviate such uncertainty, prior work has introduced ways to assist the surgeon by estimating the tool-tip distance to the retina and providing haptic or auditory feedback. However, automating the tool-navigation task itself remains unsolved and largely unexplored. Such a capability, if reliably automated, could serve as a building block to streamline complex procedures and reduce the chance for tissue damage. Towards this end, we propose to automate the tool-navigation task by mimicking the perception-action feedback loop of an expert surgeon. Specifically, a deep network is trained to imitate expert trajectories toward various locations on the retina based on recorded visual servoing to a given goal specified by the user. The proposed autonomous navigation system is evaluated in simulation and in real-life experiments using a silicone eye phantom. We show that the network can reliably navigate a surgical tool to various desired locations within 137 µm accuracy in phantom experiments and 94 µm in simulation, and generalizes well to unseen situations such as in the presence of auxiliary surgical tools, variable eye backgrounds, and brightness conditions.
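The sketch below illustrates the imitation-learning idea in Python (PyTorch): a small policy network maps a retinal image and a user-specified 2D goal to a tool-tip velocity command and is trained by regressing recorded expert actions. The architecture, tensor shapes, and names are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class NavigationPolicy(nn.Module):
    """Toy image+goal -> tool-tip velocity policy for behavior cloning."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                 # encode the retina image
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(                    # fuse with the 2D goal
            nn.Linear(32 + 2, 64), nn.ReLU(),
            nn.Linear(64, 3))                         # 3D velocity command

    def forward(self, image, goal_xy):
        feat = self.encoder(image)
        return self.head(torch.cat([feat, goal_xy], dim=1))

def bc_step(policy, optimizer, image, goal_xy, expert_action):
    """One behavior-cloning step: regress the expert's recorded action."""
    pred = policy(image, goal_xy)
    loss = nn.functional.mse_loss(pred, expert_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```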