
Title: Evaluation of Interventional Planning Software Features for MR-guided Transrectal Prostate Biopsies
This work presents interventional planning software to be used in conjunction with a robotic manipulator to perform transrectal MR-guided prostate biopsies. The software was designed with a generic manipulator in mind, operating under two modes: side-firing and end-firing of the biopsy needle. Studies were conducted in which urologists used the software to plan virtual biopsies. The results identify software features relevant to operating efficiently under the two modes of operation.
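As a concrete illustration of the two modes of operation, the sketch below computes a needle firing axis and insertion depth for a generic probe pose. The geometry conventions, function names, and NumPy implementation are illustrative assumptions, not the paper's actual planning code.

```python
# Hypothetical sketch of mode-dependent needle geometry for a
# transrectal probe; conventions here are assumptions for illustration.
import numpy as np

def needle_axis(probe_dir, mode):
    """Unit vector along which the needle fires.

    In end-firing mode the needle leaves along the probe's long axis;
    in side-firing mode it leaves roughly perpendicular to it.
    Assumes the probe axis is not vertical (degenerate for side-firing).
    """
    probe_dir = probe_dir / np.linalg.norm(probe_dir)
    if mode == "end-firing":
        return probe_dir
    if mode == "side-firing":
        # Component of +z orthogonal to the probe axis (arbitrary convention).
        up = np.array([0.0, 0.0, 1.0])
        side = up - np.dot(up, probe_dir) * probe_dir
        return side / np.linalg.norm(side)
    raise ValueError(f"unknown mode: {mode}")

def insertion_depth(probe_tip, probe_dir, target, mode):
    """Depth along the firing axis from the needle exit point to the target."""
    axis = needle_axis(np.asarray(probe_dir, dtype=float), mode)
    return float(np.dot(np.asarray(target) - np.asarray(probe_tip), axis))
```

A planner comparing the two modes would, for instance, report a target as reachable in side-firing mode only if the computed depth is positive and within the needle's throw.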
Authors:
Award ID(s):
1646566
Publication Date:
NSF-PAR ID:
10223621
Journal Name:
IEEE 20th International Conference on Bioinformatics and Bioengineering (BIBE)
Page Range or eLocation-ID:
951 to 954
Sponsoring Org:
National Science Foundation
More Like this
  1. Prostate biopsy is considered a definitive way of diagnosing prostate malignancies. Urologists are currently moving toward MR-guided prostate biopsies, over conventional transrectal ultrasound-guided biopsies, for prostate cancer detection. Recently, robotic systems have started to emerge as assistance tools for urologists performing MR-guided prostate biopsies. However, these robotic assistance systems are designed for a specific clinical environment and cannot be adapted to changes in the clinical setting and/or workflow. This work presents the preliminary design of a cable-driven manipulator developed for use in both MR scanners and MR-ultrasound fusion systems. The proposed manipulator design and functionality are evaluated in a simulated virtual environment. The simulation is created in in-house-developed interventional planning software to evaluate ergonomics and usability. The results show that urologists can benefit from the proposed manipulator design and planning software to accurately perform biopsies of targeted areas in the prostate.
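For intuition about the cable-driven design mentioned above, here is a minimal sketch of cable-robot inverse kinematics: each cable length follows directly from the end-effector position and its anchor point. The point-mass simplification and anchor layout are assumptions for illustration; the abstract does not specify the manipulator's geometry.

```python
# Minimal sketch of inverse kinematics for a generic cable-driven stage.
# Treats the effector as a point; a real design would account for
# attachment offsets and cable routing.
import numpy as np

def cable_lengths(effector_pos, anchors):
    """Length each cable must take for the effector to reach effector_pos.

    anchors: (N, 3) array of fixed pulley/anchor positions.
    """
    effector_pos = np.asarray(effector_pos, dtype=float)
    return np.linalg.norm(np.asarray(anchors, dtype=float) - effector_pos, axis=1)

# Example: four anchors at the corners of a 10 cm square frame.
anchors = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0.1, 0.1, 0]])
print(cable_lengths([0.05, 0.05, 0.05], anchors))
```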
  2. The emerging potential of augmented reality (AR) to improve 3D medical image visualization for diagnosis, by immersing the user in the 3D morphology, is further enhanced by the advent of wireless head-mounted displays (HMD). Such information-immersive capabilities may also enhance the planning and visualization of interventional procedures. To this end, we introduce a computational platform that generates an augmented reality holographic scene fusing pre-operative magnetic resonance imaging (MRI) sets, segmented anatomical structures, and an actuated model of an interventional robot for performing MRI-guided and robot-assisted interventions. The interface enables the operator to manipulate the presented images and rendered structures using voice and gestures, as well as to control the robot. The software uses forbidden-region virtual fixtures that alert the operator to collisions with vital structures. The platform was tested in silico with a HoloLens HMD. To address the limited computational power of the HMD, we deployed the platform on a desktop PC with two-way communication to the HMD. Operation studies demonstrated the functionality and underscored the importance of customizing the interface to a particular operator and/or procedure, as well as the need for on-site studies to assess its merit in the clinical realm.
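The forbidden-region virtual fixtures described above can be sketched as a keep-out check around segmented structures. The sphere-based zones, names, and margin below are hypothetical; an actual system would more likely test against segmented meshes or a signed distance field.

```python
# Hedged sketch of a forbidden-region virtual fixture: alert when the
# tracked tool tip comes within a margin of a vital structure.
import numpy as np

def check_virtual_fixtures(tool_tip, forbidden_spheres, margin=0.002):
    """Return names of forbidden regions the tool tip is violating.

    forbidden_spheres: list of (name, center, radius) keep-out zones
    around segmented vital structures; margin is extra clearance in m.
    """
    tool_tip = np.asarray(tool_tip, dtype=float)
    alerts = []
    for name, center, radius in forbidden_spheres:
        if np.linalg.norm(tool_tip - np.asarray(center)) < radius + margin:
            alerts.append(name)
    return alerts

# Example: alert as the tip approaches a hypothetical keep-out zone.
zones = [("urethra", [0.0, 0.0, 0.0], 0.005)]
print(check_virtual_fixtures([0.0, 0.0, 0.006], zones))  # ['urethra']
```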
  3. This work presents a platform that integrates a customized MRI data acquisition scheme with reconstruction and three-dimensional (3D) visualization modules, along with a module for controlling an MRI-compatible robotic device, to facilitate robot-assisted, MRI-guided interventional procedures. Using dynamically acquired MRI data, the computational framework of the platform generates and updates a 3D model representing the area of the procedure (AoP). To image structures of interest in the AoP that do not reside in the same or parallel slices, the MRI acquisition scheme was modified to collect a multi-slice set of slices oblique to one another, termed composing slices. Moreover, this approach interleaves the collection of the composing slices so that the same k-space segments of all slices are collected during similar time instances. This time matching of the k-space segments results in spatial matching of the imaged objects in the individual composing slices. The composing slices were used to generate and update the 3D model of the AoP. The MRI acquisition scheme was evaluated with computer simulations and experimental studies. Computer simulations demonstrated that k-space segmentation and time-matched interleaved acquisition of these segments provide spatial matching of the structures imaged with composing slices. Experimental studies used the platform to image the maneuvering of an MRI-compatible manipulator carrying tubing filled with MRI contrast agent. In vivo studies imaging the abdomen and contrast-enhanced heart of free-breathing subjects, without cardiac triggering, demonstrated spatial matching of the imaged anatomies in the composing planes. The described interventional MRI framework could assist in performing real-time MRI-guided interventions.
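The time-matched interleaving described above can be sketched as an acquisition schedule: the outer loop walks over k-space segments and the inner loop over composing slices, so the matching segment of every slice is collected during adjacent time instances. This ordering and the function below are an illustrative reconstruction, not the authors' pulse-sequence code.

```python
# Illustrative (assumed) schedule for time-matched interleaved
# acquisition of k-space segments across composing slices.
def interleaved_schedule(n_slices, n_segments):
    """Yield (slice_index, segment_index) in interleaved order.

    Segment g of all slices is acquired back-to-back, which spatially
    matches moving structures across the composing slices.
    """
    for g in range(n_segments):      # outer loop: k-space segment
        for s in range(n_slices):    # inner loop: composing slice
            yield (s, g)

# Example: 3 composing slices, k-space split into 4 segments.
for slice_idx, seg_idx in interleaved_schedule(3, 4):
    print(f"acquire segment {seg_idx} of slice {slice_idx}")
```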
  4. Shared autonomy provides a framework in which a human and an automated system, such as a robot, jointly control the system's behavior, enabling an effective solution for various applications, including human-robot interaction and remote operation of a semi-autonomous system. However, a challenging problem in shared autonomy is safety, because the human input may be unknown and unpredictable, which affects the robot's safety constraints. If the human input is a force applied through physical contact with the robot, it also alters the robot's behavior to maintain safety. We address the safety issue of shared autonomy in real-time applications by proposing a two-layer control framework. In the first layer, we use the history of human input measurements to infer what the human wants the robot to do, and we define the robot's safety constraints according to that inference. In the second layer, we formulate a rapidly-exploring random tree of barrier pairs, each composed of a barrier function and a controller. Using the controllers in these barrier pairs, the robot is able to maintain safe operation under intervention from the human input. This proposed control framework allows the robot to assist the human while preventing them from encountering safety issues. We demonstrate the proposed control framework in a simulation of a two-link manipulator robot.
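A minimal sketch of the barrier-pair idea follows, under assumed conventions: each pair couples a barrier function h (with safe set h(x) >= 0) to a controller certified on that set, and a controller is applied only when the state lies in its safe set. The selection rule, the fallback, and the one-joint example are hypothetical, not the paper's exact formulation.

```python
# Hedged sketch of barrier pairs: (barrier function, controller) tuples,
# with a simple assumed rule for choosing which controller to apply.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class BarrierPair:
    h: Callable[[list], float]           # barrier function: h(x) >= 0 is safe
    controller: Callable[[list], float]  # controller certified on that set

def safe_control(x: Sequence[float], pairs: Sequence[BarrierPair]) -> float:
    """Apply the controller of the first barrier pair whose safe set
    contains the state; fall back to zero input if none does (assumption)."""
    for pair in pairs:
        if pair.h(list(x)) >= 0.0:
            return pair.controller(list(x))
    return 0.0

# Example: keep a 1-DoF joint within |q| <= 1 rad using a PD controller.
pair = BarrierPair(h=lambda x: 1.0 - x[0] ** 2,
                   controller=lambda x: -2.0 * x[0] - 0.5 * x[1])
print(safe_control([0.3, 0.1], [pair]))  # -0.65
```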