While robot-assisted minimally invasive surgery (RMIS) affords a variety of benefits over open surgery and manual laparoscopic operations (including increased tool dexterity, reduced patient pain, smaller incisions, less trauma, shorter recovery times, and lower infection rates [1]), lack of spatial awareness remains an issue. Typical laparoscopic imaging can lack sufficient depth cues, and haptic feedback, when provided, rarely reflects realistic tissue–tool interactions. This work is part of a larger ongoing research effort to reconstruct 3D surfaces from multiple viewpoints in RMIS to increase visual perception. Manual placement and adjustment of multicamera systems in RMIS are nonideal and prone to error [2], and other autonomous approaches focus on tool tracking and do not consider reconstruction of the surgical scene [3, 4, 5]. The group's previous work investigated a novel, context-aware autonomous camera positioning method [6], which incorporated both tool location and scene coverage for multiple camera viewpoint adjustments. In this paper, the authors expand upon that work by implementing a streamlined deep reinforcement learning approach between optimal viewpoints calculated using the prior method [6], which encourages discovery of otherwise unobserved, additional camera viewpoints. Combining the framework and robustness of the previous work with the efficiency and additional viewpoints of the augmentations presented here yields improved performance and scene coverage, promising for real-time implementation.
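As a minimal illustration of the kind of objective such a viewpoint-selection agent might optimize, the sketch below combines scene-coverage gain, tool visibility, and viewpoint novelty into a single scalar reward. The signal names and weights are assumptions for illustration, not the paper's actual formulation.

```python
def viewpoint_reward(coverage_gain, tool_visible, novelty, w=(1.0, 0.5, 0.25)):
    """Hypothetical reward for a camera-positioning agent.

    coverage_gain: fraction of the scene newly covered by this viewpoint.
    tool_visible:  whether the surgical tool tip is in view (bool).
    novelty:       distance of this viewpoint from previously used ones.
    All three signals and the weights `w` are illustrative assumptions.
    """
    return w[0] * coverage_gain + w[1] * float(tool_visible) + w[2] * novelty
```

An agent trained against a reward of this shape is pushed toward viewpoints that both track the tool and expand coverage, matching the paper's stated goals.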
3D Occupancy Reconstruction in Dynamic and Deforming Surgical Environments
Vision dimensionality during minimally invasive surgery is a critical contributor to patient success. Traditional visualizations of the surgical scene are 2D camera streams that obfuscate depth perception inside the abdominal cavity. A lack of depth in surgical views causes surgeons to miss tissue targets, induce blood loss, and incorrectly assess deformation. 3D sensors, while offering key depth information, are expensive and often incompatible with current sterilization techniques. Furthermore, methods inferring a 3D space from stereoscopic video struggle with the inherent lack of unique features in the biological domain. We present an application of deep learning models that can assess simple binary occupancy from a single camera perspective to recreate the surgical scene in high fidelity. Our quantitative results (IoU = 0.82, log loss = 0.346) indicate a strong representational capability for structure in surgical scenes, enabling surgeons to reduce patient injury during minimally invasive surgery.
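The reported metrics follow standard definitions and can be computed directly from predicted and ground-truth occupancy grids. The sketch below shows both; the array shapes and names are illustrative, not the paper's evaluation code.

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union of two binary occupancy grids."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both grids empty: perfect agreement
    return np.logical_and(pred, target).sum() / union

def log_loss(prob, target, eps=1e-7):
    """Mean binary cross-entropy between predicted occupancy
    probabilities and binary ground-truth occupancy."""
    prob = np.clip(prob, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean(target * np.log(prob)
                          + (1 - target) * np.log(1 - prob)))
```

For example, a grid pair agreeing on one occupied voxel out of three in the union gives IoU = 1/3; confident, correct probabilities drive the log loss toward zero.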
- Award ID(s): 2101107
- PAR ID: 10552162
- Publisher / Repository: IEEE 2024 International Symposium on Medical Robotics (ISMR)
- Date Published:
- ISSN: 2771-9049
- ISBN: 979-8-3503-7711-8
- Page Range / eLocation ID: 1 to 7
- Format(s): Medium: X
- Location: Atlanta, GA, USA
- Sponsoring Org: National Science Foundation
More Like this
- Robot-assisted minimally invasive surgery combines the skills and techniques of highly trained surgeons with the robustness and precision of machines. Its advantages include precision beyond human dexterity alone, greater kinematic degrees of freedom at the surgical tool tip, and possibilities for remote surgical practice through teleoperation. Nevertheless, obtaining accurate force feedback during surgical operations remains a challenging hurdle. Though direct force sensing using tool-tip-mounted sensors is theoretically possible, it is not amenable to required sterilization procedures. Vision-based force estimation from real-time analysis of tissue deformation serves as a promising alternative. In this application, as in much related research in robot-assisted minimally invasive surgery, segmentation of surgical instruments in endoscopic images is a prerequisite. Thus, a surgical tool segmentation algorithm robust to partial occlusion is proposed, using DFT shape matching of a robot-kinematics shape prior (u) fused with a log-likelihood mask (Q) in the Opponent color space to generate the final mask (U). Implemented on the Raven II surgical robot system, it achieves real-time performance robust to tool tip orientation, at up to 6 fps without GPU acceleration.
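One way to picture the fusion of a kinematics shape prior (u) with a log-likelihood mask (Q) into a final binary mask (U) is sketched below. The sigmoid squashing and elementwise-product fusion are assumptions for illustration, not the paper's exact operator.

```python
import numpy as np

def fuse_masks(u, Q, threshold=0.5):
    """Fuse a shape prior with a log-likelihood mask into a binary mask.

    u: kinematics-derived shape prior, per-pixel values in [0, 1].
    Q: per-pixel log-likelihood of "tool" (higher = more likely tool).
    The sigmoid + product fusion rule here is an illustrative assumption.
    """
    q_prob = 1.0 / (1.0 + np.exp(-Q))  # squash log-likelihood to [0, 1]
    score = u * q_prob                 # agreement of prior and likelihood
    return score >= threshold          # final binary mask U
```

A pixel is kept only where the kinematic prior and the image likelihood agree, which is what makes such a fusion robust to partial occlusion of the tool.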
- Dynamic 3D reconstruction of surgical cavities is essential in a wide range of computer-assisted surgical intervention applications, including but not limited to surgical guidance, preoperative image registration, and vision-based force estimation. According to a survey on vision-based 3D reconstruction for abdominal minimally invasive surgery (MIS) [1], real-time 3D reconstruction and tissue deformation recovery remain open challenges. The main challenges include specular reflections from the wet tissue surface and the highly dynamic nature of abdominal surgical scenes. This work aims to overcome these obstacles by using multiple-viewpoint, independently moving RGB cameras to generate an accurate measurement of tissue deformation at the volume of interest (VOI), and proposes a novel, efficient camera pairing algorithm. Experimental results validate the proposed camera grouping and pair sequencing, evaluated with the Raven-II [2] surgical robot system for tool navigation, the Medtronic Stealth Station s7 surgical navigation system for real-time camera pose monitoring, and the Space Spider white light scanner to derive the ground-truth 3D model.
- Robotic-assisted minimally invasive surgery (MIS) has enabled procedures with increased precision and dexterity, but surgical robots are still open loop and require surgeons to work with a teleoperation console providing only limited visual feedback. In this setting, mechanical failures, software faults, or human errors might lead to adverse events resulting in patient complications or fatalities. We argue that impending adverse events could be detected and mitigated by applying context-specific safety constraints on the motions of the robot. We present a context-aware safety monitoring system which segments a surgical task into subtasks using kinematics data and monitors safety constraints specific to each subtask. To test our hypothesis about the context specificity of safety constraints, we analyze recorded demonstrations of dry-lab surgical tasks collected from the JIGSAWS database as well as from experiments we conducted on a Raven II surgical robot. Analysis of the trajectory data shows that each subtask of a given surgical procedure has consistent safety constraints across multiple demonstrations by different subjects. Our preliminary results show that violations of these safety constraints lead to unsafe events, and that there is often sufficient time between the constraint violation and the safety-critical event to allow for a corrective action.
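A minimal sketch of subtask-specific constraint monitoring on kinematics data is given below, assuming simple per-subtask velocity and workspace bounds. The constraint form and field names are illustrative assumptions, not the system's actual schema.

```python
def check_constraints(trajectory, constraints):
    """Flag kinematic samples that violate the active subtask's bounds.

    trajectory:  list of (subtask_id, sample) pairs, where each sample
                 is a dict with scalar "pos" and "vel" fields.
    constraints: {subtask_id: {"max_vel": float, "workspace": (lo, hi)}}
    Returns a list of (timestep, violated_constraint_name) tuples.
    """
    violations = []
    for t, (subtask, sample) in enumerate(trajectory):
        c = constraints[subtask]          # bounds specific to this subtask
        if abs(sample["vel"]) > c["max_vel"]:
            violations.append((t, "velocity"))
        lo, hi = c["workspace"]
        if not (lo <= sample["pos"] <= hi):
            violations.append((t, "workspace"))
    return violations
```

Because the bounds are looked up per subtask, a motion that is safe during one subtask (e.g. a fast approach) can still be flagged during another (e.g. needle insertion), which is the point of context-specific monitoring.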
- Laparoscopic surgery has a notably high learning curve, hindering typical approaches to training. Due to unique challenges that are not present in open surgery (the hinge effect, small field of view (FoV), lack of depth perception, and a small workspace), a surgical resident may be delayed in participating in laparoscopic surgery until later in residency. Having a narrow window to complete highly specialized training can lead to graduates feeling underprepared for solo practice. Additionally, delayed introduction may expose trainees to fewer than 200 laparoscopic cases. Therefore, there is a need for surgical residents to increase both their caseload and training window without compromising patient safety. This project aims to develop and test a proof-of-concept prototype that uses granular jamming technology to controllably vary the force required to move a laparoscopic tool. By increasing tool resistance, the device helps prevent accidental injury to important nearby anatomical structures such as the urinary tract, vasculature, and bowel. Increasing the safety of laparoscopic surgery would allow residents to begin their training earlier, gaining exposure and confidence. A device to adjust tool resistance also benefits the experienced surgeon: surgeries require continuous tool adjustment and tension, resulting in fatigue. Increasing tool resistance can assist surgeons in situations requiring continuous tension and can also provide safety against sudden movements. This investigational device was prototyped using SolidWorks CAD software, then 3D printed and assessed with a laparoscopic box trainer.