

Title: Recurrent and convolutional neural networks for sequential multispectral optoacoustic tomography (MSOT) imaging
Abstract: Multispectral optoacoustic tomography (MSOT) is a valuable technique for diagnosing and analyzing biological samples since it provides fine anatomical and physiological detail. However, acquiring volumetric MSOT with high through-plane resolution is time-consuming. Here, we propose a deep learning model based on hybrid recurrent and convolutional neural networks to generate sequential cross-sectional images for an MSOT system. The system provides three modalities (MSOT, ultrasound, and optoacoustic imaging of a specific exogenous contrast agent) in a single scan. This study used ICG-conjugated nanoworm particles (NWs-ICG) as the contrast agent. Instead of acquiring seven images with a step size of 0.1 mm, we acquire only two images with a step size of 0.6 mm as input to the proposed deep learning model, which then generates the five intermediate images at 0.1 mm steps, reducing acquisition time by approximately 71%.
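The quoted time saving follows directly from the slice counts. A back-of-the-envelope check (pure arithmetic, not from the paper's code), assuming acquisition time scales linearly with the number of physically acquired slices:

```python
# Conventional scan: 7 cross-sectional images at 0.1 mm steps.
# Proposed scheme: acquire only the 2 end slices (0.6 mm apart);
# the network synthesizes the 5 intermediate slices.
full_scan_slices = 7
acquired_slices = 2
generated_slices = full_scan_slices - acquired_slices  # 5 synthesized by the model

# Assuming acquisition time scales with the number of physical slices:
reduction = 1 - acquired_slices / full_scan_slices
print(f"Acquisition-time reduction: {reduction:.0%}")  # → 71%
```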
Award ID(s):
2237142
PAR ID:
10503399
Author(s) / Creator(s):
Publisher / Repository:
Wiley
Date Published:
Journal Name:
Journal of Biophotonics
Volume:
16
Issue:
11
ISSN:
1864-063X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Magnetic particle imaging (MPI) is an emerging noninvasive molecular imaging modality with high sensitivity and specificity, exceptional linear quantitative ability, and potential for successful applications in clinical settings. Computed tomography (CT) is typically combined with MPI to obtain more anatomical information. Herein, a deep learning-based approach for MPI-CT image segmentation is presented. The dataset used to train the proposed deep learning model was obtained from a transgenic mouse model of breast cancer following administration of indocyanine green (ICG)-conjugated superparamagnetic iron oxide nanoworms (NWs-ICG) as the tracer. The NWs-ICG particles progressively accumulate in tumors due to the enhanced permeability and retention (EPR) effect. The proposed deep learning model exploits the advantages of the multihead attention mechanism and the U-Net model to perform segmentation on the MPI-CT images, showing strong results. In addition, the model is evaluated with different numbers of attention heads to find the optimal number for our custom MPI-CT dataset.
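The abstract does not include code; as a rough illustration of the multihead attention mechanism it references, here is a minimal scaled dot-product multi-head attention in NumPy. Dimensions and the random projection weights are illustrative stand-ins for learned parameters, not taken from the paper.

```python
import numpy as np

def multihead_attention(x, num_heads, rng):
    """Scaled dot-product attention with num_heads heads over tokens x: (n, d)."""
    n, d = x.shape
    assert d % num_heads == 0
    dh = d // num_heads
    # Random projection weights stand in for learned parameters.
    wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    # Split into heads: (num_heads, n, dh)
    split = lambda m: m.reshape(n, num_heads, dh).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(dh)        # (heads, n, n)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)              # softmax per query
    out = weights @ v                                      # (heads, n, dh)
    return out.transpose(1, 0, 2).reshape(n, d)            # concatenate heads

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 32))   # e.g. 16 flattened image patches
y = multihead_attention(tokens, num_heads=4, rng=rng)
print(y.shape)  # (16, 32)
```

In a U-Net, a block like this would typically sit at the bottleneck or in skip connections, letting each spatial location attend to all others.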
  2. Abstract: Purpose: To develop a robust single breath-hold approach for volumetric lung imaging at 0.55T. Method: A balanced-SSFP (bSSFP) pulse sequence with a 3D stack-of-spiral (SoS) out-in trajectory for volumetric lung imaging at 0.55T was implemented. With 2.7× undersampling, the pulse sequence enables imaging during a 17-s breath-hold. Image reconstruction is performed using 3D SPIRiT and 3D l1-wavelet regularizations. In two healthy volunteers, single breath-hold SoS out-in bSSFP was compared against stack-of-spiral UTE (spiral UTE) and half-radial dual-echo bSSFP (bSTAR), based on signal intensity (SI), blood-lung parenchyma contrast, and image quality. In six patients with pathologies including lung nodules, fibrosis, emphysema, and air trapping, single breath-hold SoS out-in and bSTAR were compared against low-dose computed tomography (LDCT). Results: SoS out-in bSSFP achieved 2-mm isotropic resolution lung imaging with a single breath-hold duration of 17 s. SoS out-in (2-mm isotropic) provided higher lung parenchyma and blood SI and blood-lung parenchyma contrast compared to spiral UTE (2.4 × 2.4 × 2.5 mm³) and bSTAR (1.6-mm isotropic). When comparing SI normalized by voxel size, SoS out-in had lower lung parenchyma signal, higher blood signal, and higher blood-lung parenchyma contrast compared to bSTAR. In patients, SoS out-in bSSFP was able to identify lung fibrosis and lung nodules of size 4 and 8 mm, and breath-hold bSTAR was able to identify lung fibrosis and 8-mm nodules. Conclusion: Single breath-hold volumetric lung imaging at 0.55T with 2-mm isotropic spatial resolution is feasible using SoS out-in bSSFP. This approach could be useful for rapid lung disease screening and in cases where free-breathing respiratory-navigated approaches fail.
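The breath-hold duration follows from the acceleration factor: assuming scan time scales linearly with sampling, a fully sampled acquisition would take roughly 2.7 × 17 s ≈ 46 s, well beyond a comfortable single breath-hold. A quick arithmetic check (an inference from the reported numbers, not a figure stated in the abstract):

```python
breath_hold_s = 17     # reported single breath-hold duration
undersampling = 2.7    # reported acceleration factor

# Assuming scan time scales linearly with the sampled fraction:
fully_sampled_s = breath_hold_s * undersampling
print(f"Fully sampled scan would take ~{fully_sampled_s:.0f} s")  # ~46 s
```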
  3. This paper proposes a phase-to-depth deep learning model to repair shadow-induced errors for fringe projection profilometry (FPP). The model comprises two hourglass branches that extract information from texture images and phase maps and fuses the information from the two branches by concatenation and weighting. The input of the proposed model contains texture images, masks, and unwrapped phase maps, and the ground truth is the depth map from CAD models. A loss function was chosen to account for both image details and structural similarity. The training data contain 1200 samples from the verified virtual FPP system. After training, we conducted experiments on virtual and real-world scanning data, and the results support the model's effectiveness. The mean absolute error and the root mean squared error are 1.0279 mm and 1.1898 mm on the validation dataset. In addition, we analyze the influence of ambient light intensity on the model's performance. Low ambient light limits the model's performance because the model cannot extract valid information from the completely dark shadow regions in texture images. The contribution of each branch network is also investigated. Features from the texture-dominant branch are leveraged as guidance to remedy shadow-induced errors. Information from the phase-dominant branch network makes accurate predictions for the whole object. Our model provides a good reference for repairing shadow-induced errors in the FPP system.
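A loss combining "image details and structural similarity" is commonly formulated as a weighted sum of an L1 term and a (1 − SSIM) term. A simplified sketch using global image statistics (a plausible formulation under that assumption, not the authors' implementation; a real SSIM uses a sliding window):

```python
import numpy as np

def ssim_global(a, b, c1=0.01**2, c2=0.03**2):
    """Simplified SSIM using global image statistics (no sliding window)."""
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (a.var() + b.var() + c2))

def combined_loss(pred, target, alpha=0.85):
    """Weighted sum of structural dissimilarity (1 - SSIM) and mean absolute error."""
    return alpha * (1 - ssim_global(pred, target)) + (1 - alpha) * np.abs(pred - target).mean()

rng = np.random.default_rng(1)
depth = rng.random((64, 64))                         # stand-in for a depth map
noisy = depth + 0.1 * rng.standard_normal((64, 64))  # corrupted prediction
print(combined_loss(depth, depth))   # near 0 for identical images
print(combined_loss(depth, noisy))   # larger for the corrupted prediction
```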
  4. Introduction: Computed tomography perfusion (CTP) imaging requires injection of an intravenous contrast agent and increased exposure to ionizing radiation. This process can be lengthy, costly, and potentially dangerous to patients, especially in emergency settings. We propose MAGIC, a multitask, generative adversarial network-based deep learning model to synthesize an entire CTP series from only a non-contrasted CT (NCCT) input. Materials and Methods: NCCT and CTP series were retrospectively retrieved from 493 patients at UF Health with IRB approval. The data were deidentified and all images were resized to 256x256 pixels. The collected perfusion data were analyzed using the RapidAI CT Perfusion analysis software (iSchemaView, Inc. CA) to generate each CTP map. For each subject, 10 CTP slices were selected. Each slice was paired with one NCCT slice at the same location and two NCCT slices at a predefined vertical offset, resulting in 4.3K CTP images and 12.9K NCCT images used for training. The incorporation of a spatial offset into the NCCT input allows MAGIC to more accurately synthesize cerebral perfusive structures, increasing the quality of the generated images. The studies included a variety of indications, including healthy tissue, mild infarction, and severe infarction. The proposed MAGIC model incorporates a novel multitask architecture, allowing for the simultaneous synthesis of four CTP modalities: mean transit time (MTT), cerebral blood flow (CBF), cerebral blood volume (CBV), and time to peak (TTP). We propose a novel Physicians-in-the-loop module in the model's architecture, acting as a tunable layer that allows physicians to manually adjust the amount of anatomic detail present in the synthesized CTP series. Additionally, we propose two novel loss terms: multi-modal connectivity loss and extrema loss. 
The multi-modal connectivity loss leverages the multi-task nature of the model to assert that the mathematical relationship between MTT, CBF, and CBV is satisfied. The extrema loss aids in learning regions of elevated and decreased activity in each modality, allowing MAGIC to accurately learn the characteristics of diagnostic regions of interest. Corresponding NCCT and CTP slices were paired along the vertical axis. The model was trained for 100 epochs on an NVIDIA TITAN X GPU. Results and Discussion: The MAGIC model's performance was evaluated on a sample of 40 patients from the UF Health dataset. Across all CTP modalities, MAGIC produced images with high structural agreement between synthesized and clinical perfusion images (SSIM_mean = 0.801, UQI_mean = 0.926). MAGIC was able to synthesize CTP images that accurately characterize cerebral circulatory structures and identify regions of infarcted tissue, as shown in Figure 1. A blind binary evaluation was conducted to assess the presence of cerebral infarction in both the synthesized and clinical perfusion images; the synthesized images correctly predicted the presence of cerebral infarction with 87.5% accuracy. Conclusions: We proposed MAGIC, a model whose novel deep learning structures and loss terms enable high-quality synthesis of CTP maps and characterization of circulatory structures solely from NCCT images, potentially eliminating the need for intravenous contrast injection and elevated radiation exposure during perfusion imaging. This makes MAGIC a beneficial clinical tool, increasing the overall safety, accessibility, and efficiency of cerebral perfusion imaging and facilitating better patient outcomes. Acknowledgements: This work was partially supported by the National Science Foundation, IIS-1908299 III: Small: Modeling Multi-Level Connectivity of Brain Dynamics + REU Supplement, to the University of Florida.
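The connectivity loss is described only at a high level. Assuming it encodes the central volume principle (CBV = CBF × MTT, up to units), one plausible formulation is a per-voxel penalty on violations of that relationship; this is a hypothetical sketch, not the paper's actual loss:

```python
import numpy as np

def connectivity_loss(mtt, cbf, cbv):
    """Penalize voxels where the synthesized maps violate CBV = CBF * MTT.

    A hypothetical formulation of the multi-modal connectivity loss,
    assuming the central volume principle and consistent units across maps.
    """
    return np.abs(cbv - cbf * mtt).mean()

rng = np.random.default_rng(0)
cbf = rng.random((256, 256))   # synthesized cerebral blood flow map
mtt = rng.random((256, 256))   # synthesized mean transit time map
cbv_consistent = cbf * mtt     # maps that satisfy the relationship exactly
print(connectivity_loss(mtt, cbf, cbv_consistent))  # → 0.0
```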
  5. Convolutional neural network (CNN), a type of deep learning algorithm, is a powerful tool for analyzing visual images. It has been actively investigated to monitor metal additive manufacturing (AM) processes for quality control and has been proven effective. However, typical CNN algorithms inherently have two issues when used in metal AM processes. First, in many cases, acquiring datasets with sufficient quantity and quality, as well as the necessary information, is challenging because of technical difficulties and/or cost intensiveness. Second, determining a near-optimal CNN model takes considerable effort and is time-consuming, because the types and quality of datasets can differ significantly across AM processes and materials. This study proposes a novel concatenated ensemble learning method to obtain a flexible and robust algorithm for in-situ anomaly detection in wire + arc additive manufacturing (WAAM), a type of wire-based directed energy deposition (DED) process. For this, data, as well as machine learning models, were seamlessly integrated to overcome the limitations and difficulties in acquiring sufficient data and finding a near-optimal machine learning model. Using inexpensively obtainable and comprehensive datasets from the WAAM process, the proposed method was investigated and validated. In contrast to the one-dimensional and two-dimensional CNN models' accuracies of 81.6% and 88.6%, respectively, the proposed concatenated ensemble model achieved an accuracy of 98%.
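As a rough sketch of the concatenation idea (not the authors' architecture): features extracted by a 1-D branch (e.g. over a sensor signal) and a 2-D branch (e.g. over a process image) are concatenated into one vector that a shared classifier head then maps to normal/anomaly. The branch functions below are hand-crafted stand-ins for CNN feature extractors; all names and dimensions are illustrative.

```python
import numpy as np

def branch_1d(signal):
    """Stand-in for a 1-D CNN feature extractor over a sensor signal."""
    return np.array([signal.mean(), signal.std(), np.abs(np.diff(signal)).mean()])

def branch_2d(image):
    """Stand-in for a 2-D CNN feature extractor over a process image."""
    return np.array([image.mean(), image.std(), image.max() - image.min()])

def concatenated_features(signal, image):
    # Core idea: fuse both modalities into a single feature vector,
    # which a shared classifier head then maps to normal/anomaly.
    return np.concatenate([branch_1d(signal), branch_2d(image)])

rng = np.random.default_rng(42)
feats = concatenated_features(rng.standard_normal(500), rng.random((32, 32)))
print(feats.shape)  # (6,)
```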