Abstract: We apply a deep convolutional neural network segmentation model to enable novel automated microstructure segmentation applications for complex microstructures typically evaluated manually and subjectively. We explore two microstructure segmentation tasks in an openly available ultrahigh carbon steel microstructure dataset: segmenting cementite particles in the spheroidized matrix, and segmenting larger fields of view featuring grain boundary carbide, spheroidized particle matrix, particle-free grain boundary denuded zone, and Widmanstätten cementite. We also demonstrate how to combine these data-driven microstructure segmentation models to obtain empirical cementite particle size and denuded zone width distributions from more complex micrographs containing multiple microconstituents. The full annotated dataset is available on materialsdata.nist.gov.
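The last step described there, turning a predicted cementite-particle mask into an empirical size distribution, is simple to sketch. Below is a minimal example, assuming a trained segmentation model and a known pixel scale (both placeholders); the measurement itself uses standard scikit-image connected-component calls.

```python
# Minimal sketch: from a binary particle mask to an empirical particle size
# distribution. The segmentation model is a placeholder; only the measurement
# step is shown concretely.
import numpy as np
from skimage import measure

def particle_size_distribution(mask: np.ndarray, um_per_px: float) -> np.ndarray:
    """Equivalent circular diameters (micrometers) of each connected particle."""
    labeled = measure.label(mask > 0)          # label connected particles
    regions = measure.regionprops(labeled)
    # Diameter of a circle with the same area as each labeled region.
    return np.array([r.equivalent_diameter * um_per_px for r in regions])

# Hypothetical usage:
# mask = model.predict(micrograph)             # binary particle/matrix mask
# diams = particle_size_distribution(mask, um_per_px=0.05)
# print(np.percentile(diams, [25, 50, 75]))    # distribution summary
```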
A deep learning approach for complex microstructure inference
Abstract: Automated, reliable, and objective microstructure inference from micrographs is essential for a comprehensive understanding of process-microstructure-property relations and tailored materials development. However, with the increasing complexity of microstructures, such inference requires advanced segmentation methodologies. While deep learning offers new opportunities, an intuition about the required data quality/quantity and a methodological guideline for microstructure quantification are still missing. This, along with deep learning's seemingly opaque decision-making process, hampers its breakthrough in this field. We apply a multidisciplinary deep learning approach, devoting equal attention to specimen preparation and imaging, and train distinct U-Net architectures with 30–50 micrographs of different imaging modalities and electron backscatter diffraction-informed annotations. On the challenging task of lath-bainite segmentation in complex-phase steel, we achieve accuracies of 90%, rivaling expert segmentations. Further, we discuss the impact of image context, pre-training with domain-extrinsic data, and data augmentation. Network visualization techniques demonstrate plausible model decisions based on grain boundary morphology.
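As a rough illustration of the setup this abstract describes, a U-Net with a pre-trained (domain-extrinsic) encoder fine-tuned on a few dozen annotated micrographs, here is a minimal sketch. It is not the authors' code: it assumes the third-party segmentation_models_pytorch package, and the data loader is a placeholder that should yield (micrograph, mask) pairs, ideally with flip/rotate/crop augmentation applied on the fly.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with an ImageNet-pretrained ResNet encoder: "pre-training with
# domain-extrinsic data" in the abstract's terms.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=1,      # single-channel electron micrographs
    classes=1,          # binary mask, e.g., lath bainite vs. rest
)
loss_fn = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(train_loader):
    """train_loader is a placeholder DataLoader of (image, mask) batches."""
    model.train()
    for images, masks in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), masks)
        loss.backward()
        optimizer.step()
```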
- Award ID(s): 1826218
- PAR ID: 10333851
- Date Published: 2021
- Journal Name: Nature Communications
- Volume: 12
- Issue: 1
- ISSN: 2041-1723
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Segmentation of echocardiograms plays an essential role in the quantitative analysis of the heart and helps diagnose cardiac diseases. Over the past decade, deep learning-based approaches have significantly improved the performance of echocardiogram segmentation. Most deep learning-based methods assume that the image to be processed is rectangular. However, echocardiogram images are typically formed within a sector of a circle, leaving a significant region of the overall rectangular image with no data, a consequence of the ultrasound imaging methodology. This large non-imaging region can influence the training of deep neural networks. In this paper, we propose to use a polar transformation to help train deep learning algorithms. Using the r-θ transformation, a significant portion of the non-imaging background is removed, allowing the neural network to focus on the heart itself. The segmentation model is trained on both x-y and r-θ images. During inference, the predictions from the x-y and r-θ images are combined using max-voting. We verify the efficacy of our method on the CAMUS dataset with a variety of segmentation networks, encoder networks, and loss functions. The experimental results demonstrate the effectiveness and versatility of the proposed method for improving segmentation results.
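The r-θ idea above maps onto a few library calls. A minimal sketch follows, assuming the sector apex sits at the top-center of the frame and that the two trained predictors (hypothetical here) return float probability maps; the warps use OpenCV's warpPolar, and max-voting is a per-pixel maximum.

```python
import cv2
import numpy as np

def polar_fused_prediction(image, predict_xy, predict_rt, apex=None):
    """Combine x-y and r-theta predictions by per-pixel max-voting."""
    h, w = image.shape[:2]
    if apex is None:
        apex = (w / 2.0, 0.0)                 # assumed sector apex: top-center
    max_radius = float(np.hypot(h, w / 2.0))  # reach the far corners
    # Forward r-theta transform: unwraps the sector, removing much of the
    # non-imaging background from the network input.
    polar = cv2.warpPolar(image, (w, h), apex, max_radius, cv2.WARP_POLAR_LINEAR)
    prob_xy = predict_xy(image)                          # x-y probability map
    prob_rt = predict_rt(polar).astype(np.float32)       # r-theta probability map
    # Map the polar prediction back into Cartesian coordinates.
    prob_rt_xy = cv2.warpPolar(prob_rt, (w, h), apex, max_radius,
                               cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP)
    # Max-voting: keep the more confident prediction at each pixel.
    return np.maximum(prob_xy, prob_rt_xy)
```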
- Semantic segmentation of medical images is pivotal in applications like disease diagnosis and treatment planning. While deep learning automates this task effectively, it struggles in ultra-low-data regimes owing to the scarcity of annotated segmentation masks. To address this, we propose a generative deep learning framework that produces high-quality image-mask pairs as auxiliary training data. Unlike traditional generative models that separate data generation from model training, ours uses multi-level optimization for end-to-end data generation. This allows segmentation performance to guide the generation process, producing data tailored to improve segmentation outcomes. Our method demonstrates strong generalization across 11 medical image segmentation tasks and 19 datasets, covering various diseases, organs, and modalities. It improves performance by 10–20% (absolute) in both same- and out-of-domain settings and requires 8–20 times less training data than existing approaches. This greatly enhances the feasibility and cost-effectiveness of deep learning in data-limited medical imaging scenarios.
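The key coupling in such a multi-level scheme, generator updates driven by the segmentation model's validation performance, can be sketched with a one-step unrolled inner loop. This is a heavy simplification and not the paper's implementation: it assumes PyTorch plus the `higher` package, and every model, optimizer, and tensor here is a placeholder.

```python
import torch
import higher

def bilevel_step(generator, seg_model, seg_opt, gen_opt,
                 noise, val_images, val_masks, loss_fn):
    # Generator proposes synthetic image-mask pairs (placeholder interface).
    fake_images, fake_masks = generator(noise)
    with higher.innerloop_ctx(seg_model, seg_opt) as (fseg, diffopt):
        # Inner step: train the segmentation model on the generated pairs,
        # keeping the graph so gradients can reach the generator.
        inner_loss = loss_fn(fseg(fake_images), fake_masks)
        diffopt.step(inner_loss)
        # Outer step: validation loss after the inner step measures how
        # useful the generated data were; backprop updates the generator.
        val_loss = loss_fn(fseg(val_images), val_masks)
        gen_opt.zero_grad()
        val_loss.backward()
        gen_opt.step()
    return float(val_loss)
```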
- Image segmentation is an essential step in biomedical image analysis. In recent years, deep learning models have achieved significant success in segmentation. However, deep learning requires large amounts of annotated data to train these models, which can be challenging to obtain in the biomedical imaging domain. In this paper, we aim to accomplish biomedical image segmentation with limited labeled data using active learning. We present a deep active learning framework that selects additional data points to be annotated by combining U-Net with an efficient and effective query strategy that captures the most uncertain and representative points. The algorithm decouples the representativeness criterion by first finding the core points in the unlabeled pool and then selecting the most uncertain points from the reduced pool, which differ from the labeled pool. In our experiments, only 13% of the dataset was required with active learning to outperform the model trained on the entire 2018 MICCAI Brain Tumor Segmentation (BraTS) dataset. Thus, active learning reduced the amount of labeled data required for image segmentation without a significant loss in accuracy.
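The two-stage query strategy described above (representative core points first, uncertainty second) can be sketched directly. A minimal example follows, assuming per-image feature embeddings and per-pixel foreground probabilities from the U-Net are already computed (both placeholders); clustering and entropy use standard scikit-learn and NumPy calls.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_queries(features, probs, n_clusters=50, n_queries=10):
    """features: (N, D) embeddings of unlabeled images;
    probs: (N, H, W) per-pixel foreground probabilities from the U-Net."""
    # Stage 1 (representativeness): keep the point nearest each cluster
    # centroid as the reduced "core" pool.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
    dists = np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1)
    core = np.array([
        np.where(km.labels_ == c)[0][np.argmin(dists[km.labels_ == c])]
        for c in range(n_clusters)
    ])
    # Stage 2 (uncertainty): mean per-pixel binary entropy of each core image.
    p = np.clip(probs[core].reshape(len(core), -1), 1e-7, 1 - 1e-7)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p)).mean(axis=1)
    return core[np.argsort(entropy)[-n_queries:]]  # most uncertain core points
```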
- Predicting materials' microstructure from desired properties is critical for exploring new materials. Herein, a novel regression-based prediction of scanning electron microscopy (SEM) images for a target hardness using generative adversarial networks (GANs) is demonstrated. This article aims at generating realistic SEM micrographs containing rich features (e.g., grain and neck shapes, tortuosity, and spatial configurations of grains/pores) that together affect material properties but are difficult to predict. A high-performance GAN, named 'Microstructure-GAN' (M-GAN), with residual blocks that significantly improve the detail of synthesized micrographs, is established. The algorithm was trained with experimentally obtained SEM micrographs of laser-sintered alumina. After training, high-fidelity, feature-rich micrographs can be predicted for an arbitrary target hardness. Microstructure details such as small pores and grain boundaries can be observed even at the nanometer scale (~50 nm) in the predicted 1000× micrographs. A pretrained convolutional neural network (CNN) was used to evaluate the accuracy of the predicted micrographs for a specific hardness: the relative bias of the CNN-evaluated hardness of the generated micrographs was within 2.1%–2.7% of the values for experimental micrographs. This approach can potentially be applied to other microscopy data, such as atomic force, optical, and transmission electron microscopy.
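At inference time, the workflow this abstract describes reduces to conditioning a trained generator on a target hardness and sanity-checking the output with the pretrained regressor. A minimal sketch, with all model interfaces hypothetical:

```python
import torch

def micrograph_for_hardness(generator, hardness_cnn, target_hardness,
                            latent_dim=128):
    """Generate an SEM-like micrograph for a target hardness and report the
    relative bias of the CNN-evaluated hardness (interfaces are placeholders)."""
    generator.eval()
    hardness_cnn.eval()
    z = torch.randn(1, latent_dim)                       # random latent code
    h = torch.tensor([[float(target_hardness)]])
    with torch.no_grad():
        image = generator(z, h)                          # hardness-conditioned sample
        recovered = hardness_cnn(image)                  # CNN-evaluated hardness
    rel_bias = ((recovered - h).abs() / h * 100).item()  # percent
    return image, rel_bias
```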