

Search for: All records

Creators/Authors contains: "Hannaford, Blake"

Note: When clicking on a Digital Object Identifier (DOI) link, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Minimally invasive surgeries can benefit from miniaturized sensors on surgical graspers that provide additional information to the surgeon. In this work, a 6 mm ultrasound transducer was added to a surgical grasper to measure acoustic properties of the grasped tissue. However, the ultrasound sensor exhibits a ringing artifact arising from the decaying oscillation of its piezo element, and at short travel distances the artifact blends with the acoustic echo. Without a method to remove the artifact from the blended signal, it is impossible to measure one of the main characteristics of an ultrasound waveform: time of flight. In this paper, six filtering methods for clearing the artifact from the ultrasound waveform were compared: a bandpass filter, an adaptive least mean squares (LMS) filter, spectrum suppression (SPS), a recurrent neural network (RNN), long short-term memory (LSTM), and a gated recurrent unit (GRU). Following each filtering method, four time-of-flight extraction methods were compared: magnitude threshold, envelope peak detection, cross-correlation, and short-time Fourier transform (STFT). The RNN with cross-correlation pairing proved optimal for this task, achieving a root mean square error of 3.6%.
    Free, publicly-accessible full text available July 18, 2024
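The cross-correlation time-of-flight extraction named above can be illustrated with a minimal sketch. The sampling rate, pulse shape, echo amplitude, and delay below are synthetic stand-ins, not the paper's experimental parameters:

```python
import numpy as np

def time_of_flight_xcorr(received, template, fs):
    """Estimate time of flight by cross-correlating the received
    waveform with a template of the transmitted pulse."""
    corr = np.correlate(received, template, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(template) - 1)
    return lag / fs

# Synthetic example: a 5 MHz Hann-windowed tone burst, sampled at 100 MHz,
# echoed back with a 2 us delay and reduced amplitude.
fs = 100e6
t = np.arange(0, 1e-6, 1 / fs)
pulse = np.sin(2 * np.pi * 5e6 * t) * np.hanning(len(t))
delay_samples = 200                       # true delay: 2 us at 100 MHz
echo = np.zeros(4000)
echo[delay_samples:delay_samples + len(pulse)] = 0.3 * pulse

tof = time_of_flight_xcorr(echo, pulse, fs)  # recovers ~2.0e-6 s
```

The correlation peak locates the template inside the echo even at reduced amplitude, which is why cross-correlation pairs well with an upstream filter that has already suppressed the ringing artifact.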
  2. Telecystoscopy can lower the barrier to accessing critical urologic diagnostics for patients around the world. A major challenge for robotic control of flexible cystoscopes and intuitive teleoperation is pose estimation of the scope tip. We propose a novel real-time camera localization method that uses video recordings from a prior cystoscopy and a 3D bladder reconstruction to estimate cystoscope pose within the bladder during follow-up telecystoscopy. We map prior video frames into a low-dimensional space as a dictionary so that a new image can be likewise mapped to efficiently retrieve its nearest neighbor among the dictionary images. The cystoscope pose is then estimated from the correspondence among the new image, its nearest dictionary image, and the prior model from 3D reconstruction. We demonstrate the performance of our methods using bladder phantoms of varying fidelity and a servo-controlled cystoscope to simulate the use case of bladder surveillance through telecystoscopy. The servo-controlled cystoscope, with 3 degrees of freedom (angulation, roll, and insertion axes), was developed for collecting cystoscope videos from bladder phantoms. Cystoscope videos were acquired in a 2.5D bladder phantom (bladder-shaped cross-section plus height) with a panorama of a urothelium attached to the inner surface. Scans of the 2.5D phantom were performed in separate arc trajectories, each generated by actuation of the angulation axis with a fixed roll and insertion length. We further varied moving speed, imaging distance, and the presence of bladder tumors. Cystoscope videos were also acquired in a water-filled 3D silicone bladder phantom with hand-painted vasculature. Scans of the 3D phantom were performed in separate circle trajectories, each generated by actuation of the roll axis under a fixed angulation and insertion length.
These videos were used to create 3D reconstructions, dictionary sets, and test data sets for evaluating the computational efficiency and accuracy of our proposed method in comparison with a method based on global Scale-Invariant Feature Transform (SIFT) features, named SIFT-only. Our method can retrieve the nearest dictionary image for 94–100% of test frames in under 55 ms per image, whereas the SIFT-only method can find an image match for only 56–100% of test frames, in 6000–40,000 ms per image, depending on the size of the dictionary set and the richness of SIFT features in the images. Our method, with a speed of around 20 Hz for the retrieval stage, is a promising tool for real-time image-based scope localization in robotic cystoscopy when prior cystoscopy images are available.
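The dictionary-retrieval idea above (embed prior frames in a low-dimensional space, then look up a new frame's nearest neighbor) can be sketched as follows. The random "frames", PCA embedding, and all dimensions are hypothetical stand-ins for the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for prior cystoscopy frames: 64x64 grayscale
# images flattened to vectors (real frames would come from video).
dictionary_frames = rng.random((200, 64 * 64))

# Low-dimensional embedding via PCA (SVD on mean-centered data).
mean = dictionary_frames.mean(axis=0)
centered = dictionary_frames - mean
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
basis = Vt[:16]                          # keep 16 principal components
dictionary_codes = centered @ basis.T    # (200, 16) dictionary of codes

def retrieve_nearest(query_frame):
    """Project a new frame into the PCA space and return the index of
    its nearest dictionary image (Euclidean distance in the embedding)."""
    code = (query_frame - mean) @ basis.T
    dists = np.linalg.norm(dictionary_codes - code, axis=1)
    return int(np.argmin(dists))

# A query equal to a slightly perturbed dictionary frame matches itself.
query = dictionary_frames[42] + 0.01 * rng.random(64 * 64)
idx = retrieve_nearest(query)
```

Because distances are computed on 16-dimensional codes rather than full images, retrieval stays cheap even for large dictionaries, which is what makes the real-time rates reported above plausible.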
  3. Minimally invasive surgery lacks the tactile feedback that surgeons find useful for locating and diagnosing tissue abnormalities. The goal of this paper is to calibrate the sensors of a motorized Smart Grasper surgical instrument to provide accurate force and position measurements. Together with the novel calibration hardware, these measurements serve two functions. The first is to control the Grasper's motor to prevent tissue damage. The second is to act as the base upon which future work in multi-modal sensor-fusion tissue characterization can be built. Our results show that the Grasper jaw distance is a function of both applied force and motor angle, while the force the jaws apply to the tissue can be measured using the internal load cell. All code and data sets used to generate this paper can be found on GitHub at https://github.com/Yana-Sosnovskaya/ Smart Grasper public
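The finding that jaw distance depends on both applied force and motor angle suggests a simple calibration model. The sketch below fits such a model by least squares; the calibration numbers are fabricated placeholders, not the paper's data, and the real relationship need not be linear:

```python
import numpy as np

# Hypothetical calibration samples: motor angle (rad), load-cell force (N),
# and measured jaw distance (mm). Real data would come from the
# instrumented Smart Grasper.
theta = np.array([0.1, 0.1, 0.3, 0.3, 0.5, 0.5])
force = np.array([0.5, 2.0, 0.5, 2.0, 0.5, 2.0])
jaw_mm = np.array([8.8, 8.2, 6.8, 6.2, 4.8, 4.2])

# Fit d = a*theta + b*F + c, reflecting the finding that jaw distance
# is a function of both motor angle and applied force.
A = np.column_stack([theta, force, np.ones_like(theta)])
coef, *_ = np.linalg.lstsq(A, jaw_mm, rcond=None)

def jaw_distance(motor_angle, applied_force):
    """Predict jaw opening (mm) from motor angle and load-cell force."""
    return coef[0] * motor_angle + coef[1] * applied_force + coef[2]
```

A model like this lets the controller infer jaw opening without a dedicated position sensor at the jaws, which is one way the calibrated values could feed the motor-control function described above.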
  4. Accurate semantic image segmentation from medical imaging can enable intelligent vision-based assistance in robot-assisted minimally invasive surgery. The human body and surgical procedures are highly dynamic. While machine vision presents a promising approach, sufficiently large training image sets for robust performance are either costly or unavailable. This work examines three novel generative adversarial network (GAN) methods for producing usable synthetic tool images using only surgical background images and a few real tool images. The best of these three approaches generates realistic tool textures while preserving local background content by incorporating both a style-preservation and a content-loss component into the proposed multi-level loss function. The approach is quantitatively evaluated, and the results suggest that the synthetically generated training tool images enhance UNet tool segmentation performance. More specifically, with a random set of 100 cadaver and live endoscopic images from the University of Washington Sinus Dataset, the UNet trained with synthetically generated images using the presented method yielded 35.7% and 30.6% improvements over using purely real images in mean Dice coefficient and Intersection over Union scores, respectively. This study is a promising step toward using more widely available, routine screening endoscopy to preoperatively generate synthetic training tool images for intraoperative UNet tool segmentation.
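The two metrics reported above have standard definitions, sketched here on toy binary masks (the masks are illustrative, not from the dataset):

```python
import numpy as np

def dice_and_iou(pred, target):
    """Compute the Dice coefficient and Intersection over Union for a
    pair of binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    iou = inter / union
    return dice, iou

# Toy masks: the predicted tool mask overlaps 2 of 3 ground-truth pixels.
pred = np.array([[1, 1, 0], [1, 0, 0]])
target = np.array([[1, 1, 0], [0, 1, 0]])
dice, iou = dice_and_iou(pred, target)  # dice = 2*2/(3+3), iou = 2/4
```

Note that Dice weighs the overlap against the sum of mask sizes while IoU weighs it against their union, so Dice is always at least as large as IoU for the same prediction.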
  5. Background: There are well-recognized challenges to delivering specialty health care in rural settings. These challenges are particularly evident for specialized surgical care because of the lack of trained operators in rural communities. Telerobotic surgery could have a significant impact on the rural-urban health care gap, but thus far the promise of this method of health care delivery has gone unrealized. With the increasing adoption of telehealth over the past year, along with the maturation of telecommunication and robotic technologies over the past 2 decades, a reappraisal of the opportunities and barriers to widespread implementation of telerobotic surgery is warranted. Here we report the outcome of a rural telerobotic stakeholder workshop convened to explore modern-day issues critical to the advancement of telerobotic surgical health care. Materials and Methods: We assembled a multidisciplinary stakeholder panel to participate in a 2-day Rural Telerobotic Surgery Stakeholder Workshop. Participants had diverse expertise and included specialty surgeons, technology experts, and representatives of the broader telerobotic health care ecosystem, including economists, lawyers, regulatory consultants, public health advocates, rural hospital administrators, nurses, and payers. The research team reviewed transcripts from the workshop, identified themes, and generated research questions based on stakeholder comments and feedback. Results: Stakeholder discussions fell into four general themes: (1) operating room team interactions, (2) education and training, (3) network and security, and (4) economic issues. The research team then identified several research questions within each theme and proposed specific research strategies to address them. Conclusions: There are still important unanswered questions regarding the implementation and adoption of rural telerobotic surgery.
Based on stakeholder feedback, we have developed a research agenda along with suggested strategies to address outstanding research questions. The successful execution of these research opportunities will fill critical gaps in our understanding of how to advance the widespread adoption of rural telerobotic health care.
  6. While robot-assisted minimally invasive surgery (RMIS) procedures afford a variety of benefits over open surgery and manual laparoscopic operations (including increased tool dexterity; reduced patient pain, incision size, trauma, and recovery time; and lower infection rates [1]), a lack of spatial awareness remains an issue. Typical laparoscopic imaging can lack sufficient depth cues, and haptic feedback, if provided, rarely reflects realistic tissue-tool interactions. This work is part of a larger ongoing research effort to reconstruct 3D surfaces using multiple viewpoints in RMIS to increase visual perception. The manual placement and adjustment of multicamera systems in RMIS are nonideal and prone to error [2], and other autonomous approaches focus on tool tracking and do not consider reconstruction of the surgical scene [3, 4, 5]. The group's previous work investigated a novel, context-aware autonomous camera-positioning method [6], which incorporated both tool location and scene coverage for multiple-camera viewpoint adjustments. In this paper, the authors expand upon this prior work by implementing a streamlined deep reinforcement learning approach between optimal viewpoints calculated using the prior method [6], which encourages discovery of otherwise unobserved, additional camera viewpoints. Combining the framework and robustness of the previous work with the efficiency and additional viewpoints of the augmentations presented here results in improved performance and scene coverage, promising for real-time implementation.
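The core idea of learning which camera viewpoints maximize scene coverage can be caricatured with a tiny tabular sketch. This is a deliberately simplified stand-in: the paper uses deep reinforcement learning with a reward that also incorporates tool location, whereas here the per-viewpoint coverage values are invented and the learner is plain epsilon-greedy value estimation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: 8 discrete camera viewpoints, each with a hypothetical
# scene-coverage reward (noisy when sampled). The real reward is learned
# from the surgical scene, not given as a table.
coverage = np.array([0.2, 0.5, 0.9, 0.4, 0.7, 0.3, 0.8, 0.6])

q = np.zeros(8)              # running value estimate per viewpoint
alpha, epsilon = 0.1, 0.2    # learning rate and exploration rate

for step in range(2000):
    # Epsilon-greedy: occasionally try otherwise-unvisited viewpoints,
    # mirroring the goal of discovering additional camera poses.
    if rng.random() < epsilon:
        a = int(rng.integers(8))
    else:
        a = int(np.argmax(q))
    reward = coverage[a] + 0.05 * rng.standard_normal()
    q[a] += alpha * (reward - q[a])   # exponential moving average update

best_viewpoint = int(np.argmax(q))    # converges to the highest-coverage pose
```

Even this toy version shows the exploration/exploitation trade-off that motivates using reinforcement learning between the precomputed optimal viewpoints.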
  7. Laparoscopic surgery presents practical benefits over traditional open surgery, including reduced risk of infection, discomfort, and recovery time for patients. Introducing robotic systems into surgical tasks provides additional enhancements, including improved precision, remote operation, and an intelligent software layer capable of filtering aberrant motion and scaling surgical maneuvers. However, the software interface in telesurgery also lends itself to potential adversarial cyber attacks. Such attacks can negatively affect both surgeon motion commands and the sensory information relayed to the operator. To combat cyber attacks on the latter, one method of enhancing surgeon feedback through multiple sensory pathways is to incorporate reliable, complementary forms of information across different sensory modes. Built-in partial redundancies or inferences between perceptual channels, or perception complementarities, can be used both to detect and to recover from compromised operator feedback. In surgery, haptic sensations are extremely useful for surgeons in preventing undue and unwanted tissue damage from excessive tool-tissue force. Direct force sensing is not yet deployable due to the sterilization requirements of the operating room. Instead, combinations of other sensing methods may be relied upon, such as noncontact model-based force estimation. This paper presents the design of surgical simulator software that can be used for vision-based noncontact force sensing to inform the perception complementarity of vision and force feedback for telesurgery. A brief user study verifies the efficacy of graphical force feedback from vision-based force estimation and suggests that vision may effectively complement direct force sensing.
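A minimal sketch of the vision-to-graphical-feedback chain described above: visually measured tissue deformation is mapped to a force estimate and then to a warning color. The linear elastic model, the stiffness value, and the thresholds are all hypothetical simplifications; the paper's simulator uses a full model-based estimator:

```python
def estimate_force(deformation_mm, stiffness_n_per_mm=0.8):
    """Map visually measured tissue indentation (mm) to an estimated
    force (N) with a simple spring model -- an illustrative stand-in
    for model-based noncontact force estimation."""
    return stiffness_n_per_mm * deformation_mm

def feedback_color(force_n, limit_n=2.0):
    """Graphical force feedback: a color cue that warns the operator
    before an assumed tissue force limit is exceeded."""
    if force_n < 0.5 * limit_n:
        return "green"
    if force_n < limit_n:
        return "yellow"
    return "red"

# 1.5 mm of observed indentation -> 1.2 N estimate -> "yellow" warning.
color = feedback_color(estimate_force(1.5))
```

Because the color cue is derived from vision alone, it can cross-check a direct haptic channel: disagreement between the two is exactly the kind of perception complementarity the paper proposes for detecting compromised feedback.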
  8. Surgical robots have been introduced to operating rooms over the past few decades due to their high sensitivity, small size, and remote controllability. The cable-driven nature of many surgical robots allows the systems to be dexterous and lightweight, with diameters as low as 5 mm. However, due to the slack and stretch of the cables and the backlash of the gears, inevitable uncertainties are introduced into the kinematics calculation [1]. Since the reported end-effector position of surgical robots like RAVEN-II [2] is calculated directly from the motor encoder measurements and forward kinematics, it may contain relatively large errors of up to 10 mm, whereas semi-autonomous functions being introduced into abdominal surgeries require position inaccuracy of at most 1 mm. To resolve this problem, a cost-effective, real-time, data-driven pipeline for precise estimation of the robot end-effector position is proposed and tested on RAVEN-II. Analysis shows an improved end-effector position error of around 1 mm RMS across the entire robot workspace, without a high-resolution motion tracker. The open-source code, data sets, videos, and user guide can be found at //github.com/HaonanPeng/RAVEN Neural Network Estimator.
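The data-driven correction idea above (learn the forward-kinematics error from data, then subtract the predicted error) can be sketched as follows. The synthetic joint data, the linear error model, and all coefficients are illustrative stand-ins; the actual pipeline uses neural networks trained on RAVEN-II data paired with motion-tracker ground truth:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training set: encoder-derived joint values and the
# corresponding Cartesian error (mm) of the forward kinematics.
joints = rng.uniform(-1, 1, (500, 3))
true_error = joints @ np.array([[4.0], [-3.0], [2.0]]) + 1.0  # synthetic

# Ridge-regularized least squares as a stand-in correction model.
X = np.column_stack([joints, np.ones(len(joints))])
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ true_error)

def corrected_position(fk_position_mm, joint_angles):
    """Subtract the learned error estimate from the forward-kinematics
    output to obtain a corrected end-effector position."""
    x = np.append(joint_angles, 1.0)
    return fk_position_mm - float(x @ w)

# Residual error of the correction model on its synthetic training set.
residual = true_error.ravel() - (X @ w).ravel()
rms_mm = float(np.sqrt(np.mean(residual ** 2)))
```

On real hardware the error is nonlinear in the joint state (cable stretch, backlash), which is why the paper uses a learned nonlinear estimator rather than the linear fit sketched here.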
  9. Robot-assisted minimally invasive surgery has made a substantial impact in operating rooms over the past few decades thanks to its high dexterity, small tool size, and role in the adoption of minimally invasive techniques. In recent years, intelligence and varying levels of surgical robot autonomy have emerged thanks to medical robotics endeavors at numerous academic institutions and leading surgical robot companies. To accelerate interaction within the research community and prevent repeated development, we propose the Collaborative Robotics Toolkit (CRTK), a common API for the RAVEN-II and the da Vinci Research Kit (dVRK), two open surgical robot platforms installed at more than 40 institutions worldwide. CRTK has since broadened to include other robots and devices, including simulated robotic systems and industrial robots. This common API is a community software infrastructure for research and education in cutting-edge human-robot collaboration areas such as semi-autonomous teleoperation and medical robotics. This paper presents the concepts, design details, and integration of CRTK with physical robot systems and simulation platforms.
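A flavor of the common-API idea can be sketched with CRTK's naming convention, in which verbs such as measured_js (read the current joint state) and servo_jp (stream a joint-position setpoint) are shared across robots. The toy class below mimics that convention for a simulated arm; it is an illustration of the naming pattern, not the real ROS-based CRTK implementation:

```python
class CrtkLikeArm:
    """A toy arm exposing CRTK-style verbs, so the same client code
    could drive any robot that follows the convention."""

    def __init__(self, n_joints):
        self._jp = [0.0] * n_joints   # joint positions (rad)

    def measured_js(self):
        """Return the current joint state (CRTK's measured_js verb)."""
        return list(self._jp)

    def servo_jp(self, setpoint):
        """Accept a streamed joint-position setpoint (CRTK's servo_jp verb)."""
        if len(setpoint) != len(self._jp):
            raise ValueError("setpoint dimension mismatch")
        self._jp = list(setpoint)

# The same client code works for any CRTK-conformant arm.
arm = CrtkLikeArm(n_joints=7)
arm.servo_jp([0.1, 0.0, 0.05, 0.0, 0.0, 0.0, 0.0])
state = arm.measured_js()
```

Standardizing the verb set is what lets teleoperation or autonomy code written against one platform (e.g., RAVEN-II) run against another (e.g., dVRK or a simulator) without modification.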