

Search results for all records where the Award ID contains 1751522


  1. Photoacoustic imaging has recently demonstrated promise for surgical guidance, enabling visualization of tool tips during surgical and non-surgical interventions. Most conventional transducers used to receive photoacoustic signals are rigid, whereas a flexible array can deform to maintain complete contact with surfaces of different geometries. In this work, we present photoacoustic images acquired with a flexible array transducer held in multiple concave shapes in phantom and ex vivo bovine liver experiments targeted toward interventional photoacoustic applications. We validate our image reconstruction equations for known sensor geometries with simulated data, and we provide empirical elevation field-of-view, target position, and image quality measurements. The elevation field-of-view was 6.08 mm at a depth of 4 cm and greater than 13 mm at a depth of 5 cm. Target depth agreement with ground truth ranged from 98.35% to 99.69%. The mean lateral and axial target sizes when imaging 600 μm-core-diameter optical fibers inserted within the phantoms ranged from 0.98 to 2.14 mm and from 1.61 to 2.24 mm, respectively. The mean ± one standard deviation of lateral and axial target sizes when surrounded by liver tissue were 1.80 ± 0.48 mm and 2.17 ± 0.24 mm, respectively. Contrast, signal-to-noise, and generalized contrast-to-noise ratios ranged 6.92–24.42 dB, 46.50–67.51 dB, and 0.76–1, respectively, within the elevational field-of-view. Results establish the feasibility of implementing photoacoustic-guided surgery with a flexible array transducer. (Minimal sketches of delay-and-sum reconstruction for an arbitrary element geometry and of the reported image quality metrics follow this list.)

     
  2. Spectral unmixing techniques for photoacoustic images are often used to isolate signal origins (e.g., blood, contrast agents, lipids). However, these techniques often require many (e.g., 12–59) wavelength transmissions for optimal performance to exploit the optical properties of different biological chromophores. Analysis of the acoustic frequency response of photoacoustic signals has the potential to provide additional discrimination of photoacoustic signals from different materials, with the added benefit of potentially requiring only a few optical wavelength emissions. This study presents our initial results testing this hypothesis in a phantom experiment, given the task of differentiating photoacoustic signals from deoxygenated hemoglobin (Hb) and methylene blue (MB). Coherence-based beamforming, principal component analysis, and nearest neighbor classification were employed to determine ground-truth labels, perform feature extraction, and classify image contents, respectively. The mean ± one standard deviation of classification accuracy increased from 0.65 ± 0.16 with one wavelength emission to 0.88 ± 0.17 with two. With the optimal laser wavelength pair of 710 nm and 870 nm, the sensitivity and specificity for detecting MB over Hb were both 1.00. Results are highly promising for the differentiation of photoacoustic-sensitive materials, with performance comparable to that achieved with more conventional multispectral laser wavelength approaches. (A minimal sketch of the principal component analysis and nearest-neighbor classification pipeline follows this list.)
  3. Tool tip visualization is an essential component of multiple robotic surgical and interventional procedures. In this paper, we introduce a real-time photoacoustic visual servoing system that processes information directly from raw acoustic sensor data, without requiring image formation or segmentation, in order to make robot path planning decisions that track and maintain visualization of tool tips. The performance of this novel deep learning-based visual servoing system is compared to that of a visual servoing system that relies on image formation followed by segmentation to make and execute robot path planning decisions. Experiments were conducted with a plastisol phantom, ex vivo tissue, and a needle as the interventional tool. Needle tip tracking with the deep learning-based approach outperformed the image-based segmentation approach by 67.7% and 55.3% in phantom and ex vivo tissue, respectively. In addition, the deep learning-based system operated within the frame-rate-limiting 10 Hz laser pulse repetition frequency, with mean execution times of 75.2 ms and 73.9 ms per acquisition frame with phantom and ex vivo tissue, respectively. These results highlight the benefits of our new approach, which integrates deep learning with robotic systems for improved automation and visual servoing of tool tips.
  4. Photoacoustic imaging, the combination of optics and acoustics to visualize differences in optical absorption, has recently demonstrated strong viability as a method to provide critical guidance during multiple surgeries and procedures. Benefits include its potential to assist with tumor resection, identify hemorrhaged and ablated tissue, visualize metal implants (e.g., needle tips, tool tips, brachytherapy seeds), track catheter tips, and avoid accidental injury to critical subsurface anatomy (e.g., major vessels and nerves hidden by tissue during surgery). These benefits are significant because they reduce surgical error, associated surgery-related complications (e.g., cancer recurrence, paralysis, excessive bleeding), and accidental patient death in the operating room. This invited review covers multiple aspects of the use of photoacoustic imaging to guide both surgical and related non-surgical interventions. Applicable organ systems span structures within the head to the contents of the toes, with an eye toward surgical and interventional translation for the benefit of patients and for use in operating rooms and interventional suites worldwide. We additionally include a critical discussion of the complete systems and tools needed to maximize the success of surgical and interventional applications of photoacoustic-based technology, spanning light delivery, acoustic detection, and robotic methods. Multiple enabling hardware and software integration components are also discussed, concluding with a summary and future outlook based on the current state of technological developments, recent achievements, and possible new directions.

     
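
The flexible array work in item 1 reconstructs images using delay-and-sum equations defined for known, possibly concave, sensor geometries. The sketch below illustrates that general idea in Python; it is not the authors' implementation, and the function name, image grid, sampling rate, and 1540 m/s speed of sound are assumptions for illustration only.

    import numpy as np
    from scipy.signal import hilbert

    def das_reconstruct(channel_data, elem_xz, grid_x, grid_z, fs, c=1540.0):
        """Delay-and-sum photoacoustic reconstruction for arbitrary element positions.

        channel_data  : (n_elements, n_samples) raw received data, t = 0 at the laser pulse
        elem_xz       : (n_elements, 2) lateral/axial element positions in meters;
                        may describe a curved (flexible) aperture
        grid_x, grid_z: 1-D arrays of image pixel coordinates in meters
        fs            : sampling frequency in Hz
        c             : assumed speed of sound in m/s
        """
        n_elem, n_samp = channel_data.shape
        xx, zz = np.meshgrid(grid_x, grid_z)           # (nz, nx) pixel grid
        image = np.zeros_like(xx, dtype=float)

        for e in range(n_elem):
            ex, ez = elem_xz[e]
            # One-way travel distance from each pixel to this element
            # (photoacoustic signals travel one way, from source to sensor).
            dist = np.sqrt((xx - ex) ** 2 + (zz - ez) ** 2)
            idx = np.round(dist / c * fs).astype(int)  # nearest-sample delay lookup
            valid = idx < n_samp
            delayed = np.zeros_like(image)
            delayed[valid] = channel_data[e, idx[valid]]
            image += delayed                           # coherent sum across the aperture

        # Envelope detection along depth, then log compression for display.
        envelope = np.abs(hilbert(image, axis=0))
        envelope /= envelope.max() + 1e-12
        return 20 * np.log10(envelope + 1e-12)

    # Hypothetical usage (shapes and values are placeholders):
    # img_db = das_reconstruct(rf_data, elem_xz, x_grid, z_grid, fs=40e6)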
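
Item 1 also reports contrast, signal-to-noise ratio, and generalized contrast-to-noise ratio (gCNR). The helpers below follow common definitions of these metrics (contrast and SNR in dB, gCNR as one minus the histogram overlap of target and background samples); the exact definitions and region selections used in the paper may differ.

    import numpy as np

    def contrast_db(target, background):
        # Contrast between mean target and mean background envelope amplitudes.
        return 20 * np.log10(np.mean(target) / np.mean(background))

    def snr_db(target, background):
        # Ratio of the mean target amplitude to the background standard deviation.
        return 20 * np.log10(np.mean(target) / np.std(background))

    def gcnr(target, background, bins=256):
        # Generalized CNR: one minus the overlap of the target and background
        # amplitude histograms (0 = indistinguishable, 1 = fully separable).
        lo = min(target.min(), background.min())
        hi = max(target.max(), background.max())
        f, _ = np.histogram(target, bins=bins, range=(lo, hi), density=True)
        g, _ = np.histogram(background, bins=bins, range=(lo, hi), density=True)
        bin_width = (hi - lo) / bins
        return 1.0 - np.sum(np.minimum(f, g)) * bin_width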
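
Item 2 extracts features from the acoustic frequency content of photoacoustic signals with principal component analysis and classifies them with a nearest neighbor classifier. The sketch below shows one way such a pipeline could be assembled with scikit-learn; the synthetic spectra, component count, neighbor count, and train/test split are placeholders rather than the study's settings.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Placeholder for magnitude spectra of beamformed photoacoustic signals:
    # rows are signals (regions of interest), columns are acoustic frequency bins.
    n_signals, n_freq_bins = 200, 128
    spectra = rng.normal(size=(n_signals, n_freq_bins))
    labels = rng.integers(0, 2, size=n_signals)   # 0 = Hb, 1 = MB (ground truth)

    X_train, X_test, y_train, y_test = train_test_split(
        spectra, labels, test_size=0.3, random_state=0)

    # Feature extraction: project spectra onto a small number of principal components.
    pca = PCA(n_components=10).fit(X_train)
    train_feats = pca.transform(X_train)
    test_feats = pca.transform(X_test)

    # Nearest neighbor classification in the reduced feature space.
    clf = KNeighborsClassifier(n_neighbors=1).fit(train_feats, y_train)
    accuracy = clf.score(test_feats, y_test)
    print(f"Classification accuracy: {accuracy:.2f}")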