Title: Multicamera 3D Reconstruction of Dynamic Surgical Cavities: Camera Grouping and Pair Sequencing
Dynamic 3D reconstruction of surgical cavities is essential in a wide range of computer-assisted surgical intervention applications, including surgical guidance, pre-operative image registration, and vision-based force estimation. According to a survey on vision-based 3D reconstruction for abdominal minimally invasive surgery (MIS) [1], real-time 3D reconstruction and tissue deformation recovery remain open challenges. The main difficulties are specular reflections from the wet tissue surface and the highly dynamic nature of abdominal surgical scenes. This work addresses these obstacles by using multiple independently moving RGB cameras with distinct viewpoints to generate an accurate measurement of tissue deformation at the volume of interest (VOI), and proposes a novel, efficient camera pairing algorithm. Experimental results validate the proposed camera grouping and pair sequencing; the approach was evaluated with the Raven-II [2] surgical robot system for tool navigation, the Medtronic Stealth Station S7 surgical navigation system for real-time camera pose monitoring, and the Space Spider white-light scanner to derive the ground-truth 3D model.
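The abstract does not detail the pairing criterion, so the following is only a hedged sketch of what pair scoring and greedy sequencing could look like: the baseline-angle heuristic, the `pair_score` function, and the ideal triangulation angle are all illustrative assumptions, not the authors' algorithm.

```python
import itertools
import numpy as np

def pair_score(c_i, c_j, voi_center, ideal_angle_deg=15.0):
    """Illustrative heuristic: score a camera pair by how close its
    triangulation angle at the volume of interest (VOI) is to an
    assumed ideal stereo baseline angle."""
    v_i = voi_center - c_i  # viewing ray from camera i toward the VOI
    v_j = voi_center - c_j
    cos_a = np.dot(v_i, v_j) / (np.linalg.norm(v_i) * np.linalg.norm(v_j))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return -abs(angle - ideal_angle_deg)  # higher is better

def sequence_pairs(centers, voi_center):
    """Greedily order all camera pairs from best to worst score."""
    pairs = itertools.combinations(range(len(centers)), 2)
    scored = [((i, j), pair_score(centers[i], centers[j], voi_center))
              for i, j in pairs]
    return [p for p, _ in sorted(scored, key=lambda x: x[1], reverse=True)]

# Toy example: four cameras observing a VOI at the origin
cams = [np.array(c, float) for c in
        [(0.2, 0, 0.3), (-0.2, 0, 0.3), (0, 0.2, 0.3), (0, -0.2, 0.3)]]
print(sequence_pairs(cams, np.zeros(3)))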
Award ID(s):
1637444
PAR ID:
10209006
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
2019 International Symposium on Medical Robotics (ISMR), Atlanta, GA, USA, 2019
Page Range / eLocation ID:
1 to 7
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The dimensionality of surgical vision during minimally invasive surgery is a critical contributor to patient outcomes. Traditional visualizations of the surgical scene are 2D camera streams that obscure depth perception inside the abdominal cavity. A lack of depth in surgical views causes surgeons to miss tissue targets, induce blood loss, and incorrectly assess deformation. 3D sensors, while offering key depth information, are expensive and often incompatible with current sterilization techniques. Furthermore, methods inferring a 3D space from stereoscopic video struggle with the inherent lack of unique features in the biological domain. We present an application of deep learning models that can assess simple binary occupancy from a single camera perspective to recreate the surgical scene in high fidelity. Our quantitative results (IoU=0.82, log loss=0.346) indicate a strong representational capability for structure in surgical scenes, enabling surgeons to reduce patient injury during minimally invasive surgery.
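For reference, the two metrics quoted above (IoU and log loss) can be computed for a binary occupancy prediction as in this generic sketch; the function names and toy grid are illustrative, not the paper's evaluation code.

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary occupancy masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def log_loss(prob, target, eps=1e-7):
    """Mean binary cross-entropy between predicted occupancy
    probabilities and ground-truth occupancy."""
    p = np.clip(prob, eps, 1.0 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

# Toy example on a 4x4 occupancy grid
gt = np.array([[1, 1, 0, 0]] * 4)
prob = np.full((4, 4), 0.5)
prob[:, :2] = 0.9
print(iou(prob > 0.5, gt), log_loss(prob, gt))
```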
  2.
    Robot-assisted minimally invasive surgery combines the skills and techniques of highly trained surgeons with the robustness and precision of machines. Its advantages include precision beyond human dexterity alone, greater kinematic degrees of freedom at the surgical tool tip, and possibilities for remote surgical practice through teleoperation. Nevertheless, obtaining accurate force feedback during surgical operations remains a challenging hurdle. Though direct force sensing using tool-tip-mounted sensors is theoretically possible, it is not amenable to required sterilization procedures. Vision-based force estimation from real-time analysis of tissue deformation serves as a promising alternative. In this application, as in much related research in robot-assisted minimally invasive surgery, segmentation of surgical instruments in endoscopic images is a prerequisite. Thus, a surgical tool segmentation algorithm robust to partial occlusion is proposed, using DFT shape matching of a robot-kinematics shape prior (u) fused with a log-likelihood mask (Q) in the Opponent color space to generate the final mask (U). Implemented on the Raven II surgical robot system, the method achieves real-time performance robust to tool tip orientation, at up to 6 fps without GPU acceleration.
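A minimal sketch of the frequency-domain matching idea follows, assuming the shape prior u is aligned to the likelihood mask Q by FFT-based cross-correlation and then fused by a simple thresholded intersection; both the alignment and the fusion rule here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def dft_match_and_fuse(u, Q):
    """Align a binary shape prior u to a log-likelihood mask Q via
    FFT-based circular cross-correlation, then fuse them into a
    final mask U. Alignment/fusion rules are illustrative only."""
    # Cross-correlation via the DFT: corr = IFFT(FFT(Q) * conj(FFT(u)))
    corr = np.real(np.fft.ifft2(np.fft.fft2(Q) * np.conj(np.fft.fft2(u))))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    u_aligned = np.roll(np.roll(u, dy, axis=0), dx, axis=1)
    # Fuse: keep likelihood evidence only where the prior supports it
    U = (u_aligned > 0) & (Q > Q.mean())
    return U

# Toy example: a shifted square prior recovered against a noisy mask
u = np.zeros((64, 64)); u[10:20, 10:20] = 1
Q = np.random.rand(64, 64) * 0.2; Q[34:44, 30:40] += 1.0
print(dft_match_and_fuse(u, Q).sum())
```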
  3. Abstract This study explores the mechanical interactions between surgical needles and soft tissues during procedures like biopsies and brachytherapy. A key challenge is needle tip deflection, which can cause deviation from the intended target. The study aims to develop an analytical model that predicts needle tip deflection during insertion by combining principles from interfacial mechanics and soft tissue deformation. A modified version of the dynamic Euler-Bernoulli beam theory is employed to model needle insertion and predict needle tip deflection. The model’s predictions are then compared to experimental data obtained from needle insertions in real tissues. The research aims to deepen our understanding of needle-tissue interactions and develop a reliable model for predicting needle deflection, ultimately enhancing surgical robots and navigation systems for safer and more precise percutaneous procedures. Pig organs are used as a material data source for a viscoelastic model, simulating needle insertion into kidney-like environments and analyzing organ deformation. The modified Euler-Bernoulli beam theory considers the viscoelastic properties of the tissue. Deflection is then calculated and compared to experimental data, with the analytical predictions differing from experimental results by 5–10%.
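As a much simpler point of reference than the paper's dynamic, viscoelastic formulation, the static Euler-Bernoulli tip deflection of a cantilevered cylindrical needle under a transverse tip load follows the textbook formula sketched below; the load case and parameter values are illustrative only.

```python
import math

def cantilever_tip_deflection(force_n, length_m, young_pa, diameter_m):
    """Static Euler-Bernoulli tip deflection of a cantilevered
    cylindrical needle under a transverse tip load:
        delta = F * L^3 / (3 * E * I),  with  I = pi * d^4 / 64."""
    inertia = math.pi * diameter_m ** 4 / 64.0  # second moment of area
    return force_n * length_m ** 3 / (3.0 * young_pa * inertia)

# Illustrative numbers: 0.5 N tip load on a 150 mm long,
# 1.27 mm diameter steel needle (E ~ 200 GPa)
print(cantilever_tip_deflection(0.5, 0.15, 200e9, 1.27e-3))  # metres
```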
  4. Agaian, Sos S.; DelMarco, Stephen P.; Asari, Vijayan K. (Eds.)
    High-accuracy localization and user position tracking are critical to improving the quality of augmented reality environments. The biggest challenge facing developers is localizing the user based on visible surroundings. Current solutions rely on the Global Positioning System (GPS) for tracking and orientation. However, GPS receivers have an accuracy of about 10 to 30 meters, which is not accurate enough for augmented reality, which needs precision measured in millimeters or smaller. This paper describes the development and demonstration of a head-worn augmented reality (AR) based vision-aided indoor navigation system, which localizes the user without relying on a GPS signal. Commercially available augmented reality headsets allow individuals to capture the field of vision using the front-facing camera in real time. Utilizing captured image features as navigation-related landmarks allows localizing the user in the absence of a GPS signal. The proposed method involves three steps: detailed front-scene camera data are collected and processed for landmark recognition; the individual’s current position is detected and located using feature matching; and arrows are displayed to indicate areas that require more data collection, if needed. Computer simulations indicate that the proposed augmented reality-based vision-aided indoor navigation system can provide precise simultaneous localization and mapping in a GPS-denied environment. Keywords: Augmented reality, navigation, GPS, HoloLens, vision, positioning system, localization
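A minimal sketch of the feature-matching step, assuming ORB features and brute-force Hamming matching in OpenCV against a pre-collected landmark image; the pipeline shape and file names are assumptions, not the paper's implementation.

```python
import cv2

def match_landmark(query_path, landmark_path, max_matches=30):
    """Match ORB features between a live camera frame and a stored
    landmark image; a strong match set suggests the user is near the
    landmark's recorded position."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    landmark = cv2.imread(landmark_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(query, None)
    kp2, des2 = orb.detectAndCompute(landmark, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return matches[:max_matches]

# Usage (paths are placeholders):
# good = match_landmark("frame.png", "hallway_landmark.png")
# print(len(good), "matches")
```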
  5. This paper proposes an AR-based real-time mobile system for assistive indoor navigation with target segmentation (ARMSAINTS) for both sighted and blind or low-vision (BLV) users to safely explore and navigate an indoor environment. The solution comprises four major components: graph construction, hybrid modeling, real-time navigation, and target segmentation. The system uses an automatic graph construction method to generate a graph from a 2D floorplan and a Delaunay triangulation-based localization method to provide precise localization with negligible error. The 3D obstacle detection method integrates the existing capability of AR with a 2D object detector and a semantic target segmentation model to detect and track 3D bounding boxes of obstacles and people, increasing BLV users’ safety and understanding when traveling in the indoor environment. The entire system requires no installation or maintenance of expensive infrastructure, runs in real time on a smartphone, and can easily adapt to environmental changes.
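The paper does not detail its Delaunay-based localization; a hedged sketch using SciPy might triangulate known floorplan anchor points and report which triangle contains a position estimate, as below (the anchor coordinates and function names are illustrative).

```python
import numpy as np
from scipy.spatial import Delaunay

def build_triangulation(anchor_xy):
    """Delaunay-triangulate known 2D anchor points from a floorplan."""
    return Delaunay(np.asarray(anchor_xy, dtype=float))

def locate(tri, point_xy):
    """Return the vertices of the triangle containing the estimated
    position, or None if the point lies outside the triangulation."""
    simplex = tri.find_simplex(np.asarray(point_xy, dtype=float))
    if simplex < 0:
        return None
    return tri.points[tri.simplices[simplex]]

# Toy floorplan with four corner anchors and one interior query
tri = build_triangulation([(0, 0), (10, 0), (10, 8), (0, 8)])
print(locate(tri, (3.0, 2.0)))
```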