Title: A machine and human reader study on AI diagnosis model safety under attacks of adversarial images
While active efforts advance medical artificial intelligence (AI) model development and clinical translation, safety issues of these AI models are emerging, yet little research has addressed them. We perform a study to investigate the behavior of an AI diagnosis model under adversarial images generated by Generative Adversarial Network (GAN) models and to evaluate the effects on human experts when visually identifying potential adversarial images. Our GAN model makes intentional modifications to the diagnosis-sensitive contents of mammogram images in deep learning-based computer-aided diagnosis (CAD) of breast cancer. In our experiments, the adversarial samples fool the AI-CAD model into outputting a wrong diagnosis on 69.1% of the cases that it initially classifies correctly. Five breast imaging radiologists visually identify 29%-71% of the adversarial samples. Our study suggests an imperative need for continuing research on the safety issues of medical AI models and for developing potential defensive solutions against adversarial attacks.
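The paper's attack uses a GAN to modify diagnosis-sensitive image content; as a much simpler illustration of the underlying idea (a small perturbation flipping a classifier's output), the sketch below runs a gradient-sign attack on a toy linear model. The model, "image", and step size are all invented for this illustration and are not from the study.

```python
import numpy as np

# Toy illustration of an adversarial perturbation on a linear classifier.
# The study's attack is GAN-based; the gradient-sign step shown here is a
# simpler, unrelated technique used only to illustrate the concept.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = rng.normal(size=64)   # weights of a toy "diagnosis" model
x = rng.normal(size=64)   # a toy "image" (flattened)

def predict(x):
    return sigmoid(w @ x)  # P(malignant)

# The gradient of sigmoid(w @ x) w.r.t. x is a positive scalar times w,
# so stepping along -sign(w) (or +sign(w)) moves the score toward the
# opposite side of the 0.5 decision boundary.
eps = 0.05
p0 = predict(x)
direction = -np.sign(w) if p0 > 0.5 else np.sign(w)
x_adv = x + eps * direction
p1 = predict(x_adv)
```

With a real image model the same logic applies per pixel, and the perturbation can be small enough to be hard to see — which is why the reader study above asks whether radiologists can spot the altered mammograms.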
Award ID(s):
2115082
PAR ID:
10345703
Journal Name:
Nature Communications
Volume:
12
Issue:
7281
ISSN:
2041-1723
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Yang, DN; Xie, X; Tseng, VS; Pei, J; Huang, JW; Lin, JCW (Ed.)
    Extensive research in medical imaging aims to uncover critical diagnostic features in patients, with AI-driven medical diagnosis relying on sophisticated machine learning and deep learning models to analyze, detect, and identify diseases from medical images. Despite the remarkable accuracy of these models under normal conditions, they face trustworthiness issues: their output can be manipulated by adversaries who introduce strategic perturbations into the input images. Furthermore, the scarcity of publicly available medical images, a bottleneck for reliable training, has led contemporary algorithms to depend on models pretrained on large sets of natural images, a practice referred to as transfer learning. However, a significant domain discrepancy exists between natural and medical images, which makes AI models derived from transfer learning more vulnerable to adversarial attacks. This paper proposes a domain assimilation approach that introduces texture and color adaptation into transfer learning, followed by a texture-preservation component to suppress undesired distortion. We systematically analyze the performance of transfer learning under various adversarial attacks across different data modalities, with the overarching goal of fortifying the model's robustness and security in medical imaging tasks. The results demonstrate high effectiveness in reducing attack efficacy, contributing toward more trustworthy transfer learning in biomedical applications.
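The color-adaptation idea in the abstract above can be illustrated with a minimal per-channel statistics transfer (in the spirit of Reinhard color transfer): shift a natural image's channel mean and spread toward a grayscale-like medical target domain. The function name, target statistics, and array sizes are assumptions for this sketch, not the paper's published method.

```python
import numpy as np

# Minimal sketch of color adaptation for domain assimilation: match the
# per-channel mean/std of a source image to target-domain statistics.
# All numbers below are invented for illustration.

def adapt_color_stats(src, tgt_mean, tgt_std, eps=1e-8):
    """Match per-channel mean/std of `src` (H, W, C) to target statistics."""
    src_mean = src.mean(axis=(0, 1), keepdims=True)
    src_std = src.std(axis=(0, 1), keepdims=True)
    out = (src - src_mean) / (src_std + eps) * tgt_std + tgt_mean
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
natural = rng.uniform(0.2, 0.9, size=(8, 8, 3))   # toy "natural" RGB patch
# Grayscale-like medical target: identical channel means, low spread
tgt_mean = np.full((1, 1, 3), 0.45)
tgt_std = np.full((1, 1, 3), 0.15)

adapted = adapt_color_stats(natural, tgt_mean, tgt_std)
```

In a transfer-learning pipeline such an adaptation would be applied to the pretraining or fine-tuning inputs; the paper's texture-preservation component, which counteracts distortion introduced by adaptation, is not sketched here.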
  3. Pancreatic ductal adenocarcinoma (PDAC) presents a critical global health challenge, and early detection is crucial for improving the 5-year survival rate. Recent advances in medical imaging and computational algorithms offer potential solutions for early diagnosis. Deep learning, particularly in the form of convolutional neural networks (CNNs), has demonstrated success in medical image analysis tasks, including classification and segmentation. However, the limited availability of clinical data for training continues to be a significant obstacle. Data augmentation, generative adversarial networks (GANs), and cross-validation are potential techniques to address this limitation and improve model performance, but effective solutions remain rare for 3D PDAC, where contrast is especially poor owing to the high heterogeneity of both tumor and background tissues. In this study, we developed a new GAN-based model, named 3DGAUnet, for generating realistic 3D CT images of PDAC tumors and pancreatic tissue; it generates the inter-slice connection data that existing 2D CT image synthesis models lack. The transition to 3D models preserves contextual information from adjacent slices, improving efficiency and accuracy, especially for low-contrast cases such as PDAC. PDAC's characteristics, such as an iso-attenuating or hypodense appearance and a lack of well-defined margins, make learning tumor shape and texture difficult. To overcome these challenges and improve the performance of 3D GAN models, we developed a 3D U-Net architecture for the generator to improve shape and texture learning for PDAC tumors and pancreatic tissue. The developed 3D GAN model was thoroughly examined and validated across many datasets to ascertain its efficacy and applicability in clinical contexts.
Our approach offers a promising path toward the creative and synergistic methods urgently needed to combat PDAC. This GAN-based model has the potential to alleviate data scarcity, elevate the quality of synthesized data, and thereby advance deep learning models that improve the accuracy and early detection of PDAC tumors, which could profoundly impact patient outcomes. Furthermore, the model could be adapted to other types of solid tumors, making significant contributions to medical image processing.
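A sketch of why the move to 3D matters: a 3D convolution kernel can respond to change between adjacent CT slices, which slice-by-slice 2D synthesis never sees. The single valid-mode convolution below is one operation, not the paper's full 3D U-Net generator; the volume and kernel are invented for illustration.

```python
import numpy as np

# A 3D convolution mixes information across the depth (slice) axis.
# Here a depth-gradient kernel detects exactly the inter-slice signal
# that a 2D model processing one slice at a time cannot capture.

def conv3d_valid(volume, kernel):
    """Valid-mode 3D cross-correlation of a D x H x W volume and kernel."""
    d, h, w = kernel.shape
    D, H, W = volume.shape
    out = np.empty((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i + d, j:j + h, k:k + w] * kernel)
    return out

# Toy 4x4x4 "CT volume" whose intensity increases steadily with depth
vol = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)

# 2x1x1 depth-gradient kernel: output = next slice minus current slice
kern = np.zeros((2, 1, 1))
kern[0, 0, 0], kern[1, 0, 0] = -1.0, 1.0

feat = conv3d_valid(vol, kern)  # shape (3, 4, 4): one map per slice pair
```

In practice this operation would be a learned `Conv3d` layer inside the U-Net encoder/decoder; the sketch only shows the kind of cross-slice context the 3D design preserves.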
  4. Generative adversarial networks (GANs) have recently been proposed as a potentially disruptive approach to generative design due to their remarkable ability to generate visually appealing and realistic samples. Yet, we show that the current generator-discriminator architecture inherently limits the ability of GANs as a design concept generation (DCG) tool. Specifically, we conduct a DCG study on a large-scale dataset based on a GAN architecture to advance the understanding of the performance of these generative models in generating novel and diverse samples. Our findings, derived from a series of comprehensive and objective assessments, reveal that while the traditional GAN architecture can generate realistic samples, the generated and style-mixed samples closely resemble the training dataset, exhibiting significantly low creativity. We propose a new generic architecture for DCG with GANs (DCG-GAN) that enables GAN-based generative processes to be guided by geometric conditions and criteria such as novelty, diversity and desirability. We validate the performance of the DCG-GAN model through a rigorous quantitative assessment procedure and an extensive qualitative assessment involving 89 participants. We conclude by providing several future research directions and insights for the engineering design community to realize the untapped potential of GANs for DCG. 
  5. Though consistently shown to detect mammographically occult cancers, breast ultrasound has been noted to have high false-positive rates. In this work, we present an AI system that achieves radiologist-level accuracy in identifying breast cancer in ultrasound images. Developed on 288,767 exams, consisting of 5,442,907 B-mode and Color Doppler images, the AI achieves an area under the receiver operating characteristic curve (AUROC) of 0.976 on a test set consisting of 44,755 exams. In a retrospective reader study, the AI achieves a higher AUROC than the average of ten board-certified breast radiologists (AUROC: 0.962 AI, 0.924 ± 0.02 radiologists). With the help of the AI, radiologists decrease their false positive rates by 37.3% and reduce requested biopsies by 27.8%, while maintaining the same level of sensitivity. This highlights the potential of AI in improving the accuracy, consistency, and efficiency of breast ultrasound diagnosis.
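AUROC, the headline metric in the reader study above, equals the probability that a randomly chosen positive exam scores higher than a randomly chosen negative one (ties counted half). A minimal pairwise implementation on toy labels and scores, assuming nothing about the study's actual data:

```python
import numpy as np

# AUROC via its probabilistic definition: compare every positive score
# against every negative score; ties contribute one half. Fine for toy
# data; production code would use a rank-based O(n log n) routine.

def auroc(labels, scores):
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels]
    neg = scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: one positive (score 0.4) is outranked by a negative (0.5),
# so the AUROC is 8 correct comparisons out of 9.
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
score = auroc(y, s)  # 8/9
```

An AUROC of 0.976 thus means that on about 97.6% of positive/negative exam pairs the AI ranks the cancer case higher, which is what makes it directly comparable to the radiologists' 0.924.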