Title: The Application of Convolutional Neural Networks (CNNs) to Recognize Defects in 3D-Printed Parts
Cracks and pores are two common defects in metallic additive manufacturing (AM) parts. In this paper, deep learning-based image analysis is performed for defect (cracks and pores) classification/detection based on SEM images of metallic AM parts. Three different levels of complexity, namely defect classification, defect detection and defect image segmentation, are successfully achieved using a simple CNN model, the YOLOv4 model and the Detectron2 object detection library, respectively. The tuned CNN model can classify any single defect as either a crack or a pore with almost 100% accuracy. The other two models can identify more than 90% of the cracks and pores in the testing images. In addition to static image analysis, defect detection is also successfully applied to a video which mimics AM process control images. The trained Detectron2 model can identify almost all the pores and cracks present in the original video. This study lays a foundation for future in situ process monitoring of the 3D printing process.
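The paper's models are not reproduced here, but as a rough sketch of the first stage (classifying a single segmented defect as a crack or a pore), a small convolutional classifier in PyTorch might look like the following. The layer sizes, 128x128 grayscale input, and label assignment are illustrative assumptions rather than the authors' architecture.

    # Minimal sketch of a crack-vs-pore CNN classifier (PyTorch).
    # Input size (1 x 128 x 128 grayscale SEM patches), channel counts, and
    # class labels are illustrative assumptions, not the paper's architecture.
    import torch
    import torch.nn as nn

    class DefectCNN(nn.Module):
        def __init__(self, num_classes: int = 2):  # 0 = crack, 1 = pore (assumed labels)
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
                nn.Linear(128, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # Example: classify one 128x128 grayscale patch.
    model = DefectCNN()
    patch = torch.randn(1, 1, 128, 128)   # stand-in for a preprocessed SEM patch
    pred = model(patch).argmax(dim=1)     # 0 -> crack, 1 -> pore (assumed mapping)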
Award ID(s): 1946231
PAR ID: 10319336
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Materials
Volume: 14
Issue: 10
ISSN: 1996-1944
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Purpose: The purpose of this study is to develop a deep learning framework for additive manufacturing (AM) that can detect different defect types without being trained on specific defect data sets and can be applied for real-time process control. Design/methodology/approach: This study develops an explainable artificial intelligence (AI) framework, a zero-bias deep neural network (DNN) model, for real-time defect detection during the AM process. In this method, the last dense layer of the DNN is replaced by two consecutive parts: a regular dense layer (L1) for dimensional reduction, and a similarity matching layer (L2) for equal-weight, non-biased cosine similarity matching. Grayscale images of 3D-printed samples acquired during printing were used as the input to the zero-bias DNN. Findings: This study demonstrates that the approach is capable of successfully detecting multiple types of defects, such as cracks, stringing and warping, with an accuracy of 99.5% and without any prior training on defective data sets. Practical implications: Once the model is set up, the computational time for anomaly detection is shorter than the image acquisition interval, indicating the potential for real-time process control. It can also be used to minimize manual processing in AI-enabled AM. Originality/value: To the best of the authors' knowledge, this is the first study to use a zero-bias DNN, an explainable AI approach, for defect detection in AM.
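The zero-bias head described above (a dense reduction layer L1 followed by an equal-weight, bias-free cosine-similarity matching layer L2) can be sketched in PyTorch roughly as follows. The feature and embedding dimensions, the fingerprint initialization, and the anomaly-flagging use are assumptions, not the authors' implementation.

    # Sketch of a zero-bias classification head: a dense reduction layer (L1)
    # followed by bias-free cosine-similarity matching against class "fingerprints" (L2).
    # Dimensions and the backbone feeding this head are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ZeroBiasHead(nn.Module):
        def __init__(self, in_features: int, embed_dim: int, num_classes: int):
            super().__init__()
            self.reduce = nn.Linear(in_features, embed_dim)  # L1: dimensional reduction
            self.fingerprints = nn.Parameter(torch.randn(num_classes, embed_dim))  # L2 reference vectors

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            z = F.normalize(self.reduce(x), dim=-1)      # unit-length embedding
            w = F.normalize(self.fingerprints, dim=-1)   # equal-weight class directions
            return z @ w.t()                             # cosine similarity scores, no bias term

    # Example: score a batch of backbone features against the class fingerprints.
    head = ZeroBiasHead(in_features=512, embed_dim=64, num_classes=4)
    scores = head(torch.randn(8, 512))  # shape (8, 4); a low maximum score can flag an anomaly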
  2. In collaborative additive manufacturing (AM), sharing process data across multiple users can provide small to medium-sized manufacturers (SMMs) with enlarged training data for part certification, facilitating accelerated adoption of metal-based AM technologies. The aggregated data can be used to develop a process-defect model that is more precise, reliable, and adaptable. However, the AM process data often contains printing path trajectory information that can significantly jeopardize intellectual property (IP) protection when shared among different users. In this study, a new adaptive AM data deidentification method is proposed that aims to mask the printing trajectory information in the AM process data in the form of melt pool images. This approach integrates stochastic image augmentation (SIA) and adaptive surrogate image generation (ASIG) via tracking melt pool geometric changes to achieve a tradeoff between AM process data privacy and utility. As a result, surrogate melt pool images are generated with perturbed printing directions. In addition, a convolutional neural network (CNN) classifier is used to evaluate the proposed method regarding privacy gain (i.e., changes in the accuracy of identifying printing orientations) and utility loss (i.e., changes in the ability of detecting process anomalies). The proposed method is validated using data collected from two cylindrical specimens using the directed energy deposition (DED) process. The case study results show that the deidentified dataset significantly improved privacy preservation while sacrificing little data utility, once shared on the cloud-based AM system for collaborative process-defect modeling.
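The SIA/ASIG method itself is not reproduced here; as a simplified analogue of the idea, randomly re-orienting a melt pool image perturbs the apparent printing direction while preserving most of the melt pool geometry. The function below is only an illustrative sketch using torchvision, not the paper's adaptive procedure.

    # Rough illustration of stochastic image augmentation for deidentification:
    # randomly rotating/flipping a melt pool image perturbs the apparent printing
    # direction. This is NOT the paper's SIA/ASIG method, only a simplified analogue.
    import random
    import torch
    import torchvision.transforms.functional as TF

    def perturb_direction(melt_pool: torch.Tensor) -> torch.Tensor:
        """melt_pool: (C, H, W) image tensor; returns a surrogate with randomized orientation."""
        angle = random.uniform(0.0, 360.0)   # random in-plane rotation
        surrogate = TF.rotate(melt_pool, angle)
        if random.random() < 0.5:            # random horizontal flip
            surrogate = TF.hflip(surrogate)
        return surrogate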
  3. Metal additive manufacturing (AM) is gaining increasing attention from academia and industry due to its unique advantages compared to traditional manufacturing processes. Part quality inspection plays a crucial role in the AM industry and can be adopted for product improvement. However, the traditional inspection process has relied on manual recognition, which can suffer from low efficiency and potential bias. This study presents a convolutional neural network (CNN) approach toward robust AM quality inspection, classifying samples as good quality, crack, gas porosity, or lack of fusion. To obtain an appropriate model, experiments were performed on a series of architectures. Moreover, data augmentation was adopted to deal with data scarcity. L2 regularization (weight decay) and dropout were applied to avoid overfitting. The impact of each strategy was evaluated. The final CNN model achieved an accuracy of 92.1% and took 8.01 milliseconds to recognize one image. The CNN model presented here can help with automatic defect recognition in the AM industry.
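As a hedged illustration of how the regularization strategies named above are typically wired together, the PyTorch snippet below applies dropout inside the classifier and L2 regularization through the optimizer's weight_decay parameter. The architecture, input size, and hyperparameters are assumptions, not those of the study.

    # Sketch of the regularization pieces mentioned in the abstract: dropout in the
    # classifier and L2 regularization via the optimizer's weight_decay term.
    # Architecture and hyperparameters are illustrative; assumes 1 x 128 x 128 grayscale input.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Dropout(p=0.5),            # dropout to reduce overfitting
        nn.Linear(64 * 32 * 32, 4),   # 4 classes: good, crack, gas porosity, lack of fusion
    )

    # L2 regularization (weight decay) applied through the optimizer.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)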
  4. Cracks in civil infrastructure, including bridges, dams, roads, and skyscrapers, potentially reduce local stiffness and cause material discontinuities, degrading the structures' designed functions and threatening public safety. This inevitable process signifies urgent maintenance needs, and early detection enables preventive measures against damage and possible failure. With the increasing size of image data, machine/deep learning-based methods have become an important branch of detecting cracks from images. This study builds an automatic crack detector using the state-of-the-art technique referred to as Mask Region-based Convolutional Neural Network (Mask R-CNN), a kind of deep learning. Mask R-CNN is a recently proposed algorithm not only for object detection and object localization but also for object instance segmentation of natural images. It is found that the built crack detector is able to perform highly effective and efficient automatic segmentation of a wide range of crack images. In addition, the proposed detector also works on videos, indicating that this Mask R-CNN-based detector provides a robust and feasible ability to detect cracks and their shapes in real time on site.
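A detector of this kind can be prototyped with torchvision's built-in Mask R-CNN implementation, as sketched below. The single "crack" class and the head replacement follow the standard torchvision fine-tuning pattern and are not the authors' code; training on a labeled crack dataset (not shown) would still be required.

    # Sketch of loading a Mask R-CNN instance-segmentation model with torchvision
    # and adapting its heads for a single "crack" class. Illustrative only; training
    # on a labeled crack dataset is not shown here.
    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

    num_classes = 2  # background + crack
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box and mask heads so they predict the crack class.
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)

    # Inference on one image tensor (C, H, W) scaled to [0, 1].
    model.eval()
    with torch.no_grad():
        out = model([torch.rand(3, 480, 640)])[0]  # dict with boxes, labels, scores, masks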
  5. Giant star-forming clumps (GSFCs) are areas of intensive star formation that are commonly observed in high-redshift (z ≳ 1) galaxies, but their formation and role in galaxy evolution remain unclear. Observations of low-redshift clumpy galaxy analogues are rare, but the availability of wide-field galaxy survey data makes the detection of large clumpy galaxy samples much more feasible. Deep Learning (DL), and in particular Convolutional Neural Networks (CNNs), have been successfully applied to image classification tasks in astrophysical data analysis. However, one application of DL that remains relatively unexplored is that of automatically identifying and localizing specific objects or features in astrophysical imaging data. In this paper, we demonstrate the use of DL-based object detection models to localize GSFCs in astrophysical imaging data. We apply the Faster Region-based Convolutional Neural Network object detection framework (FRCNN) to identify GSFCs in low-redshift (z ≲ 0.3) galaxies. Unlike other studies, we train different FRCNN models on observational data that was collected by the Sloan Digital Sky Survey and labelled by volunteers from the citizen science project 'Galaxy Zoo: Clump Scout'. The FRCNN model relies on a CNN component as a 'backbone' feature extractor. We show that CNNs that have been pre-trained for image classification using astrophysical images outperform those that have been pre-trained on terrestrial images. In particular, we compare a domain-specific CNN – 'Zoobot' – with a generic classification backbone and find that Zoobot achieves higher detection performance. Our final model is capable of producing GSFC detections with a completeness and purity of ≥0.8 while only being trained on ∼5000 galaxy images.
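As a minimal sketch of running a Faster R-CNN detector of this kind with torchvision and thresholding detections by confidence (the trade-off that drives completeness versus purity), one might write the following. The generic COCO-pretrained backbone stands in for the study's Zoobot-based models, and the input tensor and threshold are assumptions.

    # Sketch of Faster R-CNN inference with a confidence threshold (torchvision).
    # The pretrained COCO model is a stand-in, not the study's Zoobot-backbone detector.
    import torch
    import torchvision

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = torch.rand(3, 256, 256)  # stand-in for a galaxy cutout, values in [0, 1]
    with torch.no_grad():
        out = model([image])[0]

    keep = out["scores"] > 0.5       # confidence threshold trades purity against completeness
    boxes = out["boxes"][keep]       # (N, 4) bounding boxes for detections above threshold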