Title: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020
Image synthesis from corrupted contrasts increases the diversity of diagnostic information available for many neurological diseases. Recently, image-to-image translation has attracted significant interest within medical research, ranging from the successful use of the Generative Adversarial Network (GAN) to the introduction of cyclic constraints extended to multiple domains. However, in current approaches there is no guarantee that the mapping between the two image domains is unique or one-to-one. In this paper, we introduce a novel approach to unpaired image-to-image translation based on an invertible architecture. The invertible property of the flow-based architecture assures cycle-consistency of image-to-image translation without additional loss functions. We utilize the temporal information between consecutive slices to provide more constraints on the optimization for transforming one domain to another in unpaired volumetric medical images. To capture temporal structures in the medical images, we explore the displacement between consecutive slices using a deformation field. In our approach, the deformation field is used as guidance to keep the translated slices realistic and consistent across the translation. The experimental results show that the images synthesized by our approach achieve competitive performance in terms of mean squared error, peak signal-to-noise ratio, and structural similarity index when compared with existing deep learning-based methods on three standard datasets, i.e., HCP, MRBrainS13, and BraTS 2019.
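The guaranteed cycle-consistency claimed above follows directly from invertibility. As a minimal sketch (assuming a PyTorch setting; the class and parameter names below are illustrative, not the authors' exact flow architecture), a RealNVP-style affine coupling layer transforms half of the channels with a learned scale and shift conditioned on the other half, so a closed-form inverse exists and a round trip reconstructs the input without any cycle loss:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style affine coupling layer: invertible by construction,
    so translate-then-invert reproduces the input exactly
    (cycle-consistency without an explicit cycle loss)."""
    def __init__(self, channels, hidden=64):
        super().__init__()
        # Small conv net predicting per-pixel log-scale and shift from x1.
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def _scale_shift(self, x1):
        log_s, t = self.net(x1).chunk(2, dim=1)
        return torch.sigmoid(log_s + 2.0), t   # keep scales positive and stable

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        s, t = self._scale_shift(x1)
        return torch.cat([x1, x2 * s + t], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        s, t = self._scale_shift(y1)           # same s, t as in forward
        return torch.cat([y1, (y2 - t) / s], dim=1)

# Two adjacent slices stacked as channels, mimicking the use of consecutive slices.
layer = AffineCoupling(channels=2)
x = torch.randn(1, 2, 64, 64)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-4)
```

Stacking several such layers, with the channel split permuted between them, yields a flow whose inverse provides the reverse translation for free.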
Award ID(s):
1946391
NSF-PAR ID:
10320012
Journal Name:
International Conference on Medical Image Computing and Computer-Assisted Intervention
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Pancreatic ductal adenocarcinoma (PDAC) presents a critical global health challenge, and early detection is crucial for improving the 5-year survival rate. Recent advances in medical imaging and computational algorithms offer potential solutions for early diagnosis. Deep learning, particularly in the form of convolutional neural networks (CNNs), has demonstrated success in medical image analysis tasks, including classification and segmentation. However, the limited availability of clinical data for training continues to be a significant obstacle. Data augmentation, generative adversarial networks (GANs), and cross-validation can address this limitation and improve model performance, but effective solutions remain rare for 3D PDAC, where contrast is especially poor owing to the high heterogeneity of both tumor and background tissue. In this study, we developed a new GAN-based model, named 3DGAUnet, for generating realistic 3D CT images of PDAC tumors and pancreatic tissue; it can generate the inter-slice connection data that existing 2D CT image synthesis models lack. The transition to 3D models preserves contextual information from adjacent slices, improving efficiency and accuracy, especially in the poor-contrast setting of PDAC. PDAC's characteristics, such as an iso-attenuating or hypodense appearance and a lack of well-defined margins, make learning tumor shape and texture difficult. To overcome these challenges and improve the performance of 3D GAN models, our innovation was to develop a 3D U-Net architecture for the generator, improving shape and texture learning for PDAC tumors and pancreatic tissue. The developed 3D GAN model was thoroughly examined and validated across multiple datasets to ascertain its efficacy and applicability in clinical contexts. Our approach offers a promising path toward the creative and synergistic methods urgently needed to combat PDAC. This GAN-based model has the potential to alleviate data scarcity, raise the quality of synthesized data, and thereby support the development of deep learning models that improve the accuracy and early detection of PDAC tumors, which could profoundly impact patient outcomes. Furthermore, the model can potentially be adapted to other types of solid tumors, making a significant contribution to medical image processing models.
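The shift from 2D to 3D synthesis described above hinges on a volumetric generator with skip connections. The following is a minimal PyTorch sketch of a 3D U-Net generator, not the published 3DGAUnet itself; the depth, channel counts, and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

def conv3d_block(in_ch, out_ch):
    """Two 3x3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class UNet3DGenerator(nn.Module):
    """One-level 3D U-Net: the skip connection carries full-resolution
    context from adjacent slices to the decoder."""
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc = conv3d_block(in_ch, base)
        self.down = nn.MaxPool3d(2)
        self.mid = conv3d_block(base, base * 2)
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec = conv3d_block(base * 2, base)
        self.head = nn.Conv3d(base, out_ch, 1)

    def forward(self, x):
        e = self.enc(x)                                   # full-resolution features
        m = self.mid(self.down(e))                        # bottleneck at half resolution
        d = self.dec(torch.cat([self.up(m), e], dim=1))   # upsample + skip connection
        return torch.tanh(self.head(d))                   # synthetic CT volume in [-1, 1]

vol = torch.randn(1, 1, 32, 64, 64)   # (batch, channel, depth, height, width)
print(UNet3DGenerator()(vol).shape)   # torch.Size([1, 1, 32, 64, 64])
```

In a GAN setting this generator would be trained against a 3D discriminator; the deeper encoder/decoder stages and the adversarial loss are omitted here for brevity.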

     
  2. Using medical images to evaluate disease severity and change over time is a routine and important task in clinical decision making. Grading systems are often used but are unreliable, as domain experts disagree on disease severity category thresholds, and these discrete categories do not reflect the underlying continuous spectrum of disease severity. To address these issues, we developed a convolutional Siamese neural network approach to evaluate disease severity at single time points, and change between longitudinal patient visits, on a continuous spectrum. We demonstrate this in two medical imaging domains: retinopathy of prematurity (ROP) in retinal photographs and osteoarthritis in knee radiographs. Our patient cohorts consist of 4861 images from 870 patients in the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) cohort study and 10,012 images from 3021 patients in the Multicenter Osteoarthritis Study (MOST), both of which feature longitudinal imaging data. Multiple expert clinician raters ranked 100 retinal images and 100 knee radiographs from excluded test sets for severity of ROP and osteoarthritis, respectively. The Siamese neural network output for each image, compared against a pool of normal reference images, correlates with disease severity rank (ρ = 0.87 for ROP and ρ = 0.89 for osteoarthritis), both within and between the clinical grading categories; this output can therefore represent the continuous spectrum of disease severity at any single time point, and the difference between outputs can show change over time. Alternatively, paired images from the same patient at two time points can be compared directly by the Siamese neural network, yielding an additional continuous measure of change between images. Importantly, our approach does not require manual localization of the pathology of interest and requires only a binary label (same versus different) for training. The location of disease and the site of change detected by the algorithm can be visualized using an occlusion-sensitivity-map-based approach. For a longitudinal binary change detection task, our Siamese neural networks achieve test set receiver operating characteristic areas under the curve (AUCs) of up to 0.90 in evaluating ROP or knee osteoarthritis change, depending on the change detection strategy. The overall performance on this binary task is similar to that of a conventional convolutional deep neural network trained for multi-class classification. Our results demonstrate that convolutional Siamese neural networks can be a powerful tool for evaluating the continuous spectrum of disease severity and change in medical imaging.
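To make the comparison-based scoring concrete, here is a minimal PyTorch sketch of a Siamese scorer, assuming a ResNet-18 backbone and a median-over-reference-pool rule; the backbone choice, embedding size, and pooling rule are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SiameseSeverity(nn.Module):
    """Twin networks with shared weights; the Euclidean distance between
    the two embeddings acts as a continuous severity/change score."""
    def __init__(self, embed_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)            # shared feature extractor
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.backbone = backbone

    def forward(self, a, b):
        za, zb = self.backbone(a), self.backbone(b)
        return torch.norm(za - zb, dim=1)                   # one distance per pair

def severity_score(model, image, normal_pool):
    """Continuous severity: median distance to a pool of normal references."""
    with torch.no_grad():
        d = model(image.expand(normal_pool.size(0), -1, -1, -1), normal_pool)
    return d.median().item()

model = SiameseSeverity().eval()
image = torch.randn(1, 3, 224, 224)          # a single retinal photo or knee film
pool = torch.randn(8, 3, 224, 224)           # pool of normal reference images
print(severity_score(model, image, pool))
```

Change between two visits can be scored the same way by feeding the images from the two time points directly as the pair.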

     
  3. Introduction: Multi-series CT (MSCT) scans, including non-contrast CT (NCCT), CT perfusion (CTP), and CT angiography (CTA), are widely used in acute stroke imaging. While each series has its own diagnostic advantages, the varying image resolution across series hinders the radiologist's ability to discern subtle suspicious findings. In addition, higher image quality requires higher radiation doses, increasing health risks such as cataract formation and cancer induction. It is therefore crucial to develop an approach that improves MSCT resolution while lowering radiation exposure.
     Hypothesis: Because MSCT images of the same patient are highly correlated in structural features, transferring and integrating the shared and complementary information across series is beneficial for achieving high image quality.
     Methods: We propose TL-GAN, a learning-based method that uses Transfer Learning (TL) and a Generative Adversarial Network (GAN) to reconstruct high-quality diagnostic images. TL-GAN is evaluated on 4,382 images collected from nine patients' MSCT scans, comprising 415 NCCT slices, 3,696 CTP slices, and 271 CTA slices. We randomly split the nine patients into a training set (4 patients), a validation set (2 patients), and a testing set (3 patients). In preprocessing, we remove the background and skull and visualize the images in the brain window. Low-resolution images (1/4 of the original spatial size) are simulated by bicubic down-sampling. For training without TL, we train each series individually; with TL, we finetune following the scanning sequence (NCCT, CTP, then CTA).
     Results: The performance of TL-GAN is evaluated by peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index on 184 NCCT, 882 CTP, and 107 CTA test images. Figure 1 provides both visual (a-c) and quantitative (d-f) comparisons. There is a significant improvement with TL over training from scratch for NCCT, CTP, and CTA images, respectively, as assessed by one-tailed paired t-tests (p < 0.05); regions of interest are enlarged for detailed visual comparison. Further, we evaluate CTP performance by calculating the perfusion maps, including cerebral blood flow (CBF) and cerebral blood volume (CBV). The visual comparison of the perfusion maps in Figure 2 demonstrates that TL-GAN achieves high diagnostic image quality, comparable to the ground truth for both CBF and CBV maps.
     Conclusion: TL-GAN can effectively improve image resolution for MSCT, providing radiologists with more image detail for suspicious findings, and is a practical solution for MSCT image quality enhancement.
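Two pieces of the method above translate directly into code: the simulation of low-resolution inputs by bicubic down-sampling, and the transfer-learning schedule that follows the scanning order. The sketch below assumes a PyTorch setting; `train_one_series` is a hypothetical stand-in for the paper's GAN training loop:

```python
import torch
import torch.nn.functional as F

def simulate_low_res(hr, scale=4):
    """Simulate low-resolution input at 1/scale of the original spatial size
    by bicubic down-sampling (hr: tensor of shape [N, C, H, W])."""
    h, w = hr.shape[-2:]
    return F.interpolate(hr, size=(h // scale, w // scale),
                         mode="bicubic", align_corners=False)

def train_with_tl(model, loaders, train_one_series):
    """Finetune along the scanning sequence: NCCT first (from scratch),
    then CTP, then CTA, carrying the weights over between series."""
    for series in ("NCCT", "CTP", "CTA"):
        train_one_series(model, loaders[series])   # weights persist across series
    return model

hr = torch.randn(2, 1, 256, 256)      # two high-resolution CT slices
print(simulate_low_res(hr).shape)     # torch.Size([2, 1, 64, 64])
```

Training from scratch for the "without TL" baseline amounts to calling `train_one_series` on each series with a freshly initialized model instead of reusing the weights.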
  4. This work presents a novel deep learning architecture called BNU-Net for cardiac segmentation based on short-axis MRI images. Its name derives from the Batch Normalized (BN) U-Net architecture for medical image segmentation. Convolutional neural networks (CNNs) such as U-Net are supervised models that automatically learn hierarchies of features and have been widely used for image classification and segmentation tasks. Our architecture consists of an encoding path for feature extraction and a decoding path that enables precise localization. We compare this approach with the parallel U-Net approach. Both BNU-Net and U-Net are cardiac segmentation approaches: BNU-Net applies batch normalization to the output of each convolutional layer and uses the exponential linear unit (ELU) as its activation function, whereas U-Net does not apply batch normalization and is based on rectified linear units (ReLU). The presented work (i) applies various image preprocessing techniques, including affine transformations and elastic deformations, and (ii) segments the preprocessed images using the new deep learning architecture. We evaluate our approach on a dataset containing 805 MRI images from 45 patients. The experimental results reveal that our approach achieves comparable or better performance than other state-of-the-art approaches in terms of the Dice coefficient and the average perpendicular distance. Index Terms: Magnetic Resonance Imaging; Batch Normalization; Exponential Linear Units
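The architectural difference between BNU-Net and plain U-Net reduces to the composition of each convolutional block. Below is a minimal PyTorch sketch contrasting the two block styles, with illustrative channel counts rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

def bnu_block(in_ch, out_ch):
    """BNU-Net style: batch normalization after every convolution, ELU activation."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ELU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ELU(inplace=True),
    )

def unet_block(in_ch, out_ch):
    """Plain U-Net style for comparison: no batch normalization, ReLU activation."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

x = torch.randn(4, 1, 128, 128)       # a batch of short-axis MRI slices
print(bnu_block(1, 16)(x).shape)      # torch.Size([4, 16, 128, 128])
```

Swapping `bnu_block` for `unet_block` at every level of the encoder and decoder is the only change between the two compared architectures; the encoding/decoding topology is otherwise identical.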