Title: Multi‐scale cascaded networks for synthesis of mammogram to decrease intensity distortion and increase model‐based perceptual similarity
Abstract
Purpose: A synthetic digital mammogram (SDM) is a 2D image generated from digital breast tomosynthesis (DBT) and used as a substitute for a full-field digital mammogram (FFDM) to reduce the radiation dose in breast cancer screening. Previous deep learning-based methods used FFDM images as the ground truth and trained a single neural network to directly generate SDM images with appearances (e.g., intensity distribution, textures) similar to FFDM images. However, FFDM images have a different texture pattern from DBT. This difference can make training of the neural network unstable and result in high intensity distortion, which makes it hard to decrease intensity distortion and increase perceptual similarity (e.g., generate similar textures) at the same time. Clinically, radiologists want a 2D synthesized image that looks like an FFDM image, because they have long been trained on reading FFDM images, and that preserves local structures in DBT, such as masses and microcalcifications (MCs), because local structures are important for diagnosis. In this study, we propose a deep convolutional neural network that learns the transformation to generate SDM from DBT.
Methods: To decrease intensity distortion and increase perceptual similarity, a multi-scale cascaded network (MSCN) is proposed to generate low-frequency structures (e.g., intensity distribution) and high-frequency structures (e.g., textures) separately. The MSCN consists of two cascaded sub-networks: the first predicts the low-frequency part of the FFDM image; the second generates a full SDM image with FFDM-like textures based on the prediction of the first. The first sub-network, termed the low-frequency network, is trained with a mean-squared error (MSE) objective to generate a low-frequency SDM image. The second sub-network, termed the high-frequency network, is trained with a gradient-guided generative adversarial network objective to generate a full SDM image with textures similar to the FFDM image.
Results: 1646 cases with FFDM and DBT were retrospectively collected from the Hologic Selenia system for the training and validation datasets, and 145 cases with masses or MC clusters were independently collected from the Hologic Selenia system for the testing dataset. For comparison, a baseline network with the same architecture as the high-frequency network was trained to directly generate a full SDM image. Compared to the baseline, the proposed MSCN improves the peak signal-to-noise ratio from 25.3 to 27.9 dB, improves the structural similarity from 0.703 to 0.724, and significantly increases the perceptual similarity.
Conclusions: The proposed method stabilizes training and generates SDM images with lower intensity distortion and higher perceptual similarity.
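To make the two-stage design described in the Methods concrete, the following is a minimal PyTorch-style sketch of a cascaded low-frequency/high-frequency generator. Everything here is an illustrative assumption rather than the paper's configuration: the tiny sub-network and discriminator architectures, the Gaussian low-pass target, the finite-difference gradient term standing in for the gradient-guided GAN objective, the loss weights, and the reduction of the DBT volume to a single-channel input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_lowpass(x, kernel_size=15, sigma=5.0):
    """Separable Gaussian blur; builds the low-frequency training target."""
    half = kernel_size // 2
    coords = torch.arange(kernel_size, dtype=x.dtype, device=x.device) - half
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).view(1, 1, 1, -1)
    x = F.conv2d(x, g, padding=(0, half))                      # horizontal pass
    return F.conv2d(x, g.transpose(2, 3), padding=(half, 0))   # vertical pass

class SubNet(nn.Module):
    """Placeholder fully convolutional sub-network (stand-in architecture)."""
    def __init__(self, in_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.body(x)

def image_grads(img):
    """Finite-difference image gradients for the gradient-matching term."""
    return img[..., :, 1:] - img[..., :, :-1], img[..., 1:, :] - img[..., :-1, :]

# Stage 1 (low-frequency network): MSE against a low-pass-filtered FFDM.
# Stage 2 (high-frequency network): full SDM from [DBT, stage-1 output],
# trained with an adversarial term plus a gradient (texture) term.
low_net, high_net = SubNet(in_ch=1), SubNet(in_ch=2)
disc = nn.Sequential(nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                     nn.Conv2d(32, 1, 4, 2, 1))  # toy patch-level critic

def generator_losses(dbt, ffdm, w_grad=10.0):
    low_pred = low_net(dbt)                       # stage 1 prediction
    loss_low = F.mse_loss(low_pred, gaussian_lowpass(ffdm))
    sdm = high_net(torch.cat([dbt, low_pred.detach()], dim=1))  # stage 2
    dx_s, dy_s = image_grads(sdm)
    dx_f, dy_f = image_grads(ffdm)
    loss_high = (w_grad * (F.l1_loss(dx_s, dx_f) + F.l1_loss(dy_s, dy_f))
                 - disc(sdm).mean())              # WGAN-style generator term
    return loss_low, loss_high
```

The cascade is what decouples the two goals: the first loss only has to get the intensity distribution right, while the second only has to add FFDM-like texture on top of a frozen (detached) low-frequency estimate.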
Award ID(s):
1912958
PAR ID:
10419304
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
Medical Physics
Volume:
50
Issue:
2
ISSN:
0094-2405
Page Range / eLocation ID:
p. 837-853
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract. Image texture, the relative spatial arrangement of intensity values in an image, encodes valuable information about the scene. As it stands, much of this potential information remains untapped. Understanding how to decipher textural details would afford another method of extracting knowledge of the physical world from images. In this work, we attempt to bridge the gap in research between quantitative texture analysis and the visual perception of textures. The impact of changes in image texture on human observers' ability to perform signal detection and localization tasks in complex digital images is not understood. We examine this critical question by studying task-based human observer performance in detecting and localizing signals in tomographic breast images. We have also investigated how these changes impact the formation of second-order image texture. We used digital breast tomosynthesis (DBT), an FDA-approved tomographic X-ray breast imaging method, as the modality of choice to show our preliminary results. Our human observer studies involve localization ROC (LROC) studies for low-contrast mass detection in DBT. Simulated images are used because they offer the benefit of known ground truth. Our results show that changes in system geometry or processing lead to changes in image texture magnitudes, and that the variations in several well-known texture features estimated in digital images correlate with human observer detection–localization performance for signals embedded in them. This insight can enable efficient and practical techniques to identify the best imaging system designs, algorithms, or filtering tools by examining the changes in these texture features. This concept linking texture feature estimates and task-based image quality assessment can be extended to several other imaging modalities and applications, and can also offer feedback in system and algorithm design with the goal of improving perceptual benefits. The broader impact spans imaging system design, image processing, data science, machine learning, computer vision, and perceptual and vision science. Our results also point to the caution that must be exercised in using these texture features as image-based radiomic features or as predictive markers for risk assessment, as they are sensitive to system or image processing changes.
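Since second-order texture is central to record 1 above, a minimal sketch of how such features are typically estimated may help: the gray-level co-occurrence matrix (GLCM) tabulates how often pairs of intensity values co-occur at a given offset, and scalar features are derived from it. The distances, angles, and feature set below are illustrative choices, not necessarily those used in the study.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch_u8, distances=(1, 2), angles=(0.0, np.pi / 2)):
    """Classic Haralick-style second-order features of an 8-bit image patch."""
    glcm = graycomatrix(patch_u8, distances=list(distances),
                        angles=list(angles), levels=256,
                        symmetric=True, normed=True)
    # Average each property over all (distance, angle) pairs.
    return {p: float(graycoprops(glcm, p).mean())
            for p in ("contrast", "homogeneity", "energy", "correlation")}

# Example on a random patch standing in for a reconstructed DBT ROI.
patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(glcm_features(patch))
```

Comparing such feature values across system geometries or processing settings is the kind of analysis that can then be correlated with observer detection–localization performance.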
  2. Abstract. Premise: Field images are important sources of information for research in the natural sciences. However, images that lack photogrammetric scale bars, including most iNaturalist observations, cannot yield accurate trait measurements. We introduce FieldPrism, a novel system of photogrammetric markers, QR codes, and software to automate the curation of snapshot vouchers. Methods and Results: Our photogrammetric background templates (FieldSheets) increase the utility of field images by providing machine-readable scale bars and photogrammetric reference points to automatically correct image distortion and calculate a pixel-to-metric conversion ratio. Users can generate a QR code flipbook derived from a specimen identifier naming hierarchy, enabling machine-readable specimen identification for automatic file renaming. We also developed FieldStation, a Raspberry Pi–based mobile imaging apparatus that records images, GPS location, and metadata redundantly on up to four USB storage devices and can be monitored and controlled from any Wi-Fi connected device. Conclusions: FieldPrism is a flexible software tool designed to standardize and improve the utility of images captured in the field. When paired with the optional FieldStation, researchers can create a self-contained mobile imaging apparatus for quantitative trait data collection.
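As an illustration of the two corrections such a template makes possible, here is a minimal OpenCV sketch that warps an image so the template's reference points form a true rectangle and then reads off a pixel-to-metric ratio. The corner ordering, sheet dimensions, and output resolution are hypothetical placeholders, not FieldPrism's actual parameters.

```python
import cv2
import numpy as np

def rectify_and_scale(image, corners_px, sheet_w_mm=300.0, sheet_h_mm=200.0,
                      px_per_mm=4.0):
    """corners_px: detected template corners (TL, TR, BR, BL), in pixels."""
    w, h = int(sheet_w_mm * px_per_mm), int(sheet_h_mm * px_per_mm)
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    # Homography mapping the photographed sheet onto a distortion-free plane.
    H = cv2.getPerspectiveTransform(np.float32(corners_px), dst)
    rectified = cv2.warpPerspective(image, H, (w, h))
    return rectified, px_per_mm  # trait length in mm = pixel length / px_per_mm

# Example: a trait spanning 412 px in the rectified image is 412 / 4.0 = 103 mm.
```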
  3. Abstract. Background: Lung cancer is the deadliest and second most common cancer in the United States, due in part to the lack of symptoms for early diagnosis. Pulmonary nodules are small abnormal regions that can be potentially correlated with the occurrence of lung cancer. Early detection of these nodules is critical because it can significantly improve the patient's survival rate. Thoracic thin-sliced computed tomography (CT) scanning has emerged as a widely used method for the diagnosis and prognosis of lung abnormalities. Purpose: The standard clinical workflow for detecting pulmonary nodules relies on radiologists analyzing CT images to assess the risk factors of cancerous nodules. However, this approach can be error-prone due to the various causes of nodule formation, such as pollutants and infections. Deep learning (DL) algorithms have recently demonstrated remarkable success in medical image classification and segmentation. As DL becomes an ever more important assistant to radiologists in nodule detection, it is imperative to ensure that the DL algorithm and the radiologist can understand each other's decisions. This study aims to develop a framework integrating explainable AI methods to achieve accurate pulmonary nodule detection. Methods: A robust and explainable detection (RXD) framework is proposed, focusing on reducing false positives in pulmonary nodule detection. Its implementation is based on an explanation supervision method, which uses radiologists' nodule contours as supervision signals to force the model to learn nodule morphologies, enabling improved learning on small datasets. In addition, two imputation methods are applied to the nodule region annotations to reduce the noise within human annotations and allow the model to produce robust attributions that meet human expectations. Sets of 480, 265, and 265 CT images from the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset are used for training, validation, and testing, respectively. Results: Using only 10, 30, 50, and 100 training samples sequentially, our method consistently improves the classification performance and explanation quality of the baseline in terms of Area Under the Curve (AUC) and Intersection over Union (IoU). In particular, our framework with a learnable imputation kernel improves IoU over the baseline by 24.0% to 80.0%. A pre-defined Gaussian imputation kernel achieves an even greater improvement, 38.4% to 118.8% over the baseline. Compared to the baseline trained on 100 samples, our method shows less drop in AUC when trained on fewer samples. A comprehensive comparison of interpretability shows that our method aligns better with expert opinions. Conclusions: A pulmonary nodule detection framework was demonstrated using public thoracic CT image datasets. The framework integrates the robust explanation supervision (RES) technique to ensure the performance of nodule classification and morphology learning. The method can reduce the workload of radiologists and enable them to focus on the diagnosis and prognosis of potentially cancerous pulmonary nodules at an early stage, improving outcomes for lung cancer patients.
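A minimal sketch of the explanation-supervision idea in record 3 is shown below: the usual classification loss is augmented with a term that pulls the model's attribution map toward the radiologist's nodule annotation. The input-gradient saliency, the binary cross-entropy matching term, the positive-class index, and the weight lam are stand-in assumptions; the RES framework described above additionally imputes the annotation masks.

```python
import torch
import torch.nn.functional as F

def explanation_supervised_loss(model, images, labels, masks, lam=1.0):
    """images: (N, C, H, W); masks: (N, 1, H, W) float nodule annotations."""
    images = images.clone().requires_grad_(True)
    logits = model(images)
    loss_cls = F.cross_entropy(logits, labels)
    # Input-gradient saliency as a simple, differentiable attribution;
    # class index 1 is assumed to be "nodule".
    sal = torch.autograd.grad(logits[:, 1].sum(), images, create_graph=True)[0]
    sal = sal.abs().amax(dim=1, keepdim=True)                  # collapse channels
    sal = sal / (sal.amax(dim=(2, 3), keepdim=True) + 1e-8)    # scale to [0, 1]
    loss_exp = F.binary_cross_entropy(sal, masks)  # match human annotations
    return loss_cls + lam * loss_exp
```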
  4. Objectives: Magnetic resonance imaging (MRI) using 1.5T or 3.0T systems is routinely employed for assessing wrist pathology; however, due to off-resonance artifacts and high power deposition, these high-field systems have drawbacks for real-time (RT) imaging of the moving wrist. Recently, high-performance 0.55T MRI systems have become available. In this proof-of-concept study, we tested the hypothesis that RT-MRI during continuous, active, and uninterrupted wrist motion is feasible with a high-performance 0.55T system at temporal resolutions below 100 ms and that the resulting images provide visualization of tissues commonly interrogated for assessing dynamic wrist instability. Methods: Participants were scanned during uninterrupted wrist radial-ulnar deviation and clenched-fist maneuvers. The resulting images (nominal temporal resolution of 12.7–164.6 ms per image) were assessed for image quality. The feasibility of static MRI to supplement the RT-MRI acquisition was also tested. Results: The RT images with temporal resolutions < 100 ms demonstrated low distortion, few image artifacts, and higher reader assessment scores. Static MRI scans showed the ability to assess anatomical structures of interest in the wrist. Conclusion: RT-MRI of the wrist at high temporal resolution, coupled with static MRI, is feasible with a high-performance 0.55T system and may enable improved assessment of dynamic wrist dysfunction and instability. Advances in knowledge: Real-time MRI of the moving wrist is feasible with a high-performance 0.55T system and may improve the evaluation of dynamic dysfunction of the wrist.
  5. While active efforts are advancing medical artificial intelligence (AI) model development and clinical translation, safety issues of these AI models are emerging, yet little research has been done on them. We performed a study to investigate the behaviors of an AI diagnosis model under adversarial images generated by generative adversarial network (GAN) models and to evaluate the effects on human experts when they visually identify potential adversarial images. Our GAN model makes intentional modifications to the diagnosis-sensitive contents of mammogram images in deep learning-based computer-aided diagnosis (CAD) of breast cancer. In our experiments, the adversarial samples fooled the AI-CAD model into outputting a wrong diagnosis on 69.1% of the cases that were initially correctly classified by the AI-CAD model. Five breast imaging radiologists visually identified 29%-71% of the adversarial samples. Our study suggests an imperative need for continuing research on the safety issues of medical AI models and for developing potential defensive solutions against adversarial attacks.
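For readers unfamiliar with adversarial samples, the sketch below shows the classic fast gradient sign method (FGSM) on a placeholder classifier; note that the study above used GAN-generated modifications of diagnosis-sensitive content, not FGSM, so this is only the textbook illustration of the failure mode.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, image, label, eps=0.01):
    """Return a perturbed copy of `image` that pushes `model` toward error."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the loss-increasing direction, bounded by an eps budget.
    return (image + eps * image.grad.sign()).clamp(0.0, 1.0).detach()
```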