Title: Validation of Deep Learning Segmentation of CT Images of Fiber-Reinforced Composites
Micro-computed tomography (µCT) is a valuable tool for visualizing microstructures and damage in fiber-reinforced composites. However, the large data sets generated by µCT present a barrier to extracting quantitative information. Deep learning models have shown promise for overcoming this barrier by enabling automated segmentation of features of interest from the images, but robust validation methods have not yet been used to quantify the success rate of the models or the ability to extract accurate measurements from the segmented image. In this paper, we evaluate the detection rate for segmenting fibers in low-contrast CT images using a deep learning model with three different approaches for defining the reference (ground-truth) image. The feasibility of measuring sub-pixel feature dimensions from the µCT image, in certain cases where the µCT image intensity depends on the feature dimensions, is assessed and calibrated using a higher-resolution image of a polished cross-section of the test specimen at the same location as the µCT image.
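A detection-rate evaluation of the kind described above amounts to matching segmented fibers against a reference (ground-truth) set. The sketch below illustrates one plausible scheme, greedy nearest-neighbor matching of fiber centers within a pixel tolerance; the function names, the tolerance, and the matching strategy are illustrative assumptions, not the paper's exact procedure.

```python
import math

def detection_stats(detected, reference, tol=2.0):
    """Match detected fiber centers to reference (ground-truth) centers.

    Greedy matching within `tol` pixels; each reference center can be
    claimed at most once. Returns (precision, recall), where recall is
    the detection rate over the reference fibers.
    """
    unmatched = list(reference)
    tp = 0
    for d in detected:
        best, best_dist = None, tol
        for r in unmatched:
            dist = math.hypot(d[0] - r[0], d[1] - r[1])
            if dist <= best_dist:
                best, best_dist = r, dist
        if best is not None:
            unmatched.remove(best)   # consume the matched reference center
            tp += 1
    fp = len(detected) - tp          # spurious detections
    fn = len(reference) - tp         # missed fibers
    precision = tp / (tp + fp) if detected else 0.0
    recall = tp / (tp + fn) if reference else 0.0
    return precision, recall

# Toy example: three reference fibers, one missed, one false alarm
ref = [(10, 10), (20, 20), (30, 30)]
det = [(10.5, 9.8), (20.2, 20.1), (50, 50)]
p, r = detection_stats(det, ref)
```

With this toy input, two of the three reference fibers are matched and one detection is spurious, so precision and recall are both 2/3.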
Award ID(s):
1743701
PAR ID:
10373785
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Journal of Composites Science
Volume:
6
Issue:
2
ISSN:
2504-477X
Page Range / eLocation ID:
60
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Cutting-edge deep learning techniques allow for image segmentation with great speed and accuracy. However, application to problems in materials science is often difficult, since these complex models may have difficulty learning meaningful image features that would enable extension to new datasets. In situ electron microscopy provides a clear platform for utilizing automated image analysis. In this work, we consider the case of studying coarsening dynamics in supported nanoparticles, which is important for understanding, for example, the degradation of industrial catalysts. By systematically studying dataset preparation, neural network architecture, and accuracy evaluation, we describe important considerations in applying deep learning to physical applications, where generalizable and convincing models are required. With a focus on the unique challenges that arise in high-resolution images, we propose methods for optimizing the performance of image segmentation using convolutional neural networks, critically examining the use of complex deep learning models and motivating intentional process design.
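Accuracy evaluation for segmentation, as discussed above, depends heavily on the metric chosen. The pure-Python sketch below (an illustrative example, not code from the work) shows why raw pixel accuracy can be misleading for sparse features such as supported nanoparticles, and why intersection-over-union (IoU) is the more informative report.

```python
def iou(pred, target):
    """Intersection-over-union for binary masks given as flat 0/1 lists."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0

def pixel_accuracy(pred, target):
    """Fraction of pixels classified correctly."""
    return sum(p == t for p, t in zip(pred, target)) / len(pred)

# Sparse particles: only 10% of pixels are foreground. Predicting
# all-background scores 90% pixel accuracy but 0 IoU, which is why
# class-imbalanced segmentation is usually reported with IoU.
target = [1] * 10 + [0] * 90
empty = [0] * 100
```

Here `pixel_accuracy(empty, target)` is 0.9 while `iou(empty, target)` is 0.0, even though the prediction found nothing.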
  2. Deep learning algorithms have been successfully adopted to extract meaningful information from digital images, yet many remain untapped for the semantic segmentation of histopathology images. In this paper, we propose a deep convolutional neural network model that strengthens Atrous separable convolutions with high rates within spatial pyramid pooling for histopathology image segmentation. The well-known DeepLabV3Plus model was used for the encoder and decoder, with ResNet50 adopted for the encoder block; its skip connections help attenuate the degradation problem that comes with increased network depth. Three Atrous separable convolutions with higher rates were added to the existing Atrous separable convolutions. We evaluated performance on three tissue types, tumor, tumor-infiltrating lymphocytes, and stroma, comparing the proposed model with eight state-of-the-art deep learning models: DeepLabV3, DeepLabV3Plus, LinkNet, MANet, PAN, PSPnet, UNet, and UNet++. The results show that the proposed model outperforms all eight on mIOU (0.8058/0.7792) and FSCR (0.8525/0.8328) for tumor and tumor-infiltrating lymphocytes, respectively.
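The atrous (dilated) convolutions mentioned above enlarge the receptive field by sampling the input with gaps between kernel taps rather than by adding weights. A minimal 1-D illustration, assuming 'valid' padding and ignoring the separable (depthwise/pointwise) split, is:

```python
def atrous_conv1d(x, w, rate):
    """1-D atrous (dilated) convolution with 'valid' padding.

    A dilation rate r places gaps of r-1 samples between kernel taps,
    so a k-tap kernel covers (k-1)*r + 1 input positions. Stacking such
    convolutions at several rates is the idea behind DeepLab's atrous
    spatial pyramid pooling.
    """
    span = (len(w) - 1) * rate        # extra input width consumed by dilation
    return [sum(w[k] * x[i + k * rate] for k in range(len(w)))
            for i in range(len(x) - span)]

x = [0, 0, 1, 0, 0, 0, 0]   # a single unit impulse
w = [1, 1, 1]               # 3-tap box kernel
```

With rate 1 the kernel sees three adjacent samples; with rate 2 the same three weights span five samples, so the impulse is picked up at positions two apart.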
  3. Sinkholes are the most abundant surface features in karst areas worldwide. Understanding sinkhole occurrences and characteristics is critical for studying karst aquifers and mitigating sinkhole‐related hazards. Most sinkholes appear on the land surface as depressions or cover collapses and are commonly mapped from elevation data, such as digital elevation models (DEMs). Existing methods for identifying sinkholes from DEMs often require two steps: locating surface depressions and separating sinkholes from non‐sinkhole depressions. In this study, we explored deep learning to identify sinkholes directly from DEM data and aerial imagery. A key contribution of our study is an evaluation of various ways of integrating these two types of raster data. We used an image segmentation model, U‐Net, to locate sinkholes, training separate U‐Net models on four input images derived from elevation data: a DEM image, a slope image, a DEM gradient image, and a DEM‐shaded relief image. Three normalization techniques (Global, Gaussian, and Instance) were applied to improve model performance. The results suggest that deep learning is a viable method for identifying sinkholes directly from images of elevation data; in particular, DEM gradient data provided the best input for U‐Net segmentation models. The model using the DEM gradient image with Gaussian normalization achieved the best performance, with a sinkhole intersection‐over‐union (IoU) of 45.38% on the unseen test set. Aerial images, however, were not useful for training the models, as those using an aerial image as input achieved sinkhole IoUs below 3%.
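The DEM gradient input and Gaussian normalization described above can be sketched as follows. The central-difference gradient and zero-mean/unit-variance scaling are plausible readings of those terms, not the study's exact preprocessing code.

```python
import statistics

def gradient_magnitude(dem):
    """Central-difference gradient magnitude of a 2-D DEM grid.

    `dem` is a list of rows of elevations; border cells are left at 0
    for simplicity. The result highlights steep rims, which is what
    makes gradient images informative for sinkhole detection.
    """
    h, w = len(dem), len(dem[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = (dem[i][j + 1] - dem[i][j - 1]) / 2.0
            gy = (dem[i + 1][j] - dem[i - 1][j]) / 2.0
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out

def gaussian_normalize(values):
    """Zero-mean, unit-variance scaling (one reading of 'Gaussian'
    normalization), applied to a flat list of pixel values."""
    mu = statistics.fmean(values)
    sd = statistics.pstdev(values) or 1.0   # avoid division by zero
    return [(v - mu) / sd for v in values]

# A uniform north-south ramp: unit slope everywhere in the interior
ramp = [[0, 0, 0], [1, 1, 1], [2, 2, 2]]
g = gaussian_normalize([1.0, 2.0, 3.0])
```

On the ramp, the interior gradient magnitude is exactly 1; after Gaussian normalization the sample values are centered on zero.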
  4. Deep learning models have demonstrated significant advantages over traditional algorithms in image processing tasks such as object detection. However, large amounts of data are needed to train such deep networks, which limits their application to tasks such as biometric recognition, where relatively few training samples are available for each class (i.e., each individual). Researchers developing such systems rely on real biometric data, which raises privacy concerns and is restricted by the availability of extensive, varied datasets. This paper proposes a generative adversarial network (GAN)-based solution to produce training data (palm images) for improved palmprint-based biometric recognition systems. We investigate the performance of the most recent StyleGAN models in generating a thorough contactless palm image dataset for use in biometric research. Training on the publicly available H-PolyU and IIDT palmprint databases, a total of 4839 images were generated using StyleGAN models. SIFT (Scale-Invariant Feature Transform) was used to assess uniqueness by matching features across scales and orientations, yielding a similarity score of 16.12% with the most recent StyleGAN3-based model. For the regions of interest (ROIs) in both the palm and fingers, the average similarity score was 17.85%. The proposed model achieved a Frechet Inception Distance (FID) of 16.1, demonstrating strong performance. These results demonstrate that StyleGAN is effective in producing unique synthetic biometric images.
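A descriptor-based similarity score of the kind reported above is, in essence, the fraction of local features in one image that find a confident match in another. The sketch below illustrates one common matching rule (Lowe's ratio test) on toy descriptor vectors; it is a hypothetical stand-in for the paper's SIFT pipeline, not its actual code.

```python
def match_ratio(desc_a, desc_b, ratio=0.75):
    """Fraction of descriptors in desc_a with a confident match in desc_b.

    A match is accepted only if the nearest neighbor is clearly closer
    than the second nearest (Lowe's ratio test, here on squared
    distances). Lower values mean the two images share fewer local
    features, i.e., they are more distinct from each other.
    """
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    matched = 0
    for d in desc_a:
        dists = sorted(dist2(d, e) for e in desc_b)
        if len(dists) >= 2 and dists[0] < ratio ** 2 * dists[1]:
            matched += 1
    return matched / len(desc_a) if desc_a else 0.0

# Toy 2-D "descriptors": the first has a unique close match, the
# second sits between two equally good candidates and is rejected.
score = match_ratio([(0, 0), (10, 10)], [(0, 0.1), (9, 9), (11, 11)])
```

Real SIFT descriptors are 128-dimensional, but the accept/reject logic is the same.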
  5. Urban and environmental researchers seek to obtain building features (e.g., building shapes, counts, and areas) at large scales. However, blurriness, occlusions, and noise in prevailing satellite images severely hinder the performance of image segmentation, super-resolution, and deep-learning-based translation networks. In this article, we combine globally available satellite images with spatial geometric feature datasets to create a generative modeling framework that achieves significantly improved accuracy in per-building feature estimation and generates visually plausible building footprints. Our approach compensates for the degradation present in satellite images through a novel deep network design that combines segmentation, generative modeling, and adversarial learning for instance-level building features. Its robustness is demonstrated in large-scale prototypical experiments covering heterogeneous scenarios from dense urban to sparse rural. Results show better quality than advanced segmentation networks for urban and environmental planning, and show promise for future continental-scale urban applications.