
This content will become publicly available on August 5, 2023

Title: Prediction of recoater crash in laser powder bed fusion additive manufacturing using graph theory thermomechanical modeling
The objective of this work is to predict a type of thermally induced process failure called a recoater crash, which occurs frequently during laser powder bed fusion (LPBF) additive manufacturing. Rapid and accurate thermomechanical simulations are valuable for LPBF practitioners to identify and correct potential issues in the part design and processing conditions that may cause recoater crashes. In this work, to predict the likelihood of a recoater crash (recoater contact or impact), we develop and apply a computationally efficient thermomechanical modeling approach based on graph theory. The accuracy and computational efficiency of the approach are demonstrated by comparison with both non-proprietary finite element analysis (Abaqus) and proprietary LPBF simulation software (Autodesk Netfabb). Based on both numerical (verification) and experimental (validation) studies, the proposed approach is found to be 5 to 6 times faster than non-proprietary finite element modeling, and to have the same order of computational time as commercial simulation software (Netfabb), without sacrificing prediction accuracy.
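The core idea behind the graph theory approach is to replace the finite element mesh with a network of nodes sampled in the part volume, on which heat diffuses via the graph Laplacian. The following is a minimal sketch of that idea, not the authors' implementation: the node sampling, Gaussian edge weights, connection radius, and the diffusivity parameter `alpha` are all illustrative assumptions.

```python
import numpy as np

def graph_laplacian(points, radius):
    """Gaussian-weighted adjacency between nodes within `radius`; L = D - A."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    A = np.exp(-d2 / radius ** 2) * (d2 <= radius ** 2)
    np.fill_diagonal(A, 0.0)              # no self-loops
    return np.diag(A.sum(axis=1)) - A

def diffuse(points, T0, alpha, t, radius=1.5):
    """Advance temperature field T0 by time t via T(t) = V exp(-a w t) V^T T0."""
    L = graph_laplacian(points, radius)
    w, V = np.linalg.eigh(L)              # L is symmetric, so eigh applies
    return V @ (np.exp(-alpha * w * t) * (V.T @ T0))

# An impulse of heat at the centre of a 5x5 node grid spreads to neighbours,
# while the total heat content of the network is conserved.
pts = np.array([[float(i), float(j)] for i in range(5) for j in range(5)])
T0 = np.zeros(len(pts))
T0[12] = 100.0                            # impulse at node (2, 2)
T = diffuse(pts, T0, alpha=0.1, t=5.0)
```

Because the eigendecomposition is computed once per layer geometry, time-stepping reduces to scaling the spectral coefficients, which is the source of the speedup over mesh-based solvers.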
Award ID(s):
1929172 1752069 2044710 2020246
Publication Date:
Journal Name:
Progress in Additive Manufacturing
Sponsoring Org:
National Science Foundation
More Like this
  1. Despite its potential to overcome the design and processing barriers of traditional subtractive and formative manufacturing techniques, the use of laser powder bed fusion (LPBF) metal additive manufacturing is currently limited due to its tendency to create flaws. A multitude of LPBF-related flaws, such as part-level deformation, cracking, and porosity, are linked to the spatiotemporal temperature distribution in the part during the process. The temperature distribution, also called the thermal history, is a function of several factors encompassing material properties, part geometry and orientation, processing parameters, and placement of supports, among others. This broad range of factors is difficult and expensive to optimize through empirical testing alone. Consequently, fast and accurate models to predict the thermal history are valuable for mitigating flaw formation in LPBF-processed parts. In our prior works, we developed a graph theory-based approach for predicting the temperature distribution in LPBF parts. This mesh-free approach was compared with both non-proprietary and commercial finite element packages, and the thermal history predictions were experimentally validated with in-situ infrared thermal imaging data. It was found that the graph theory-derived thermal history predictions converged within 30–50% of the time of non-proprietary finite element analysis for a similar level of prediction error. However, these prior efforts were based on small prismatic and cylinder-shaped LPBF parts. In this paper, our objective was to scale the graph theory approach to predict the thermal history of large-volume, complex-geometry LPBF parts. To realize this objective, we developed and applied three computational strategies to predict the thermal history of a stainless steel (SAE 316L) impeller having an outside diameter of 155 mm and a vertical height of 35 mm (700 layers). The impeller was processed on a Renishaw AM250 LPBF system and required 16 h to complete.
During the process, in-situ layer-by-layer steady-state surface temperature measurements for the impeller were obtained using a calibrated longwave infrared thermal camera. As an example of the outcome, on implementing one of the three strategies reported in this work, which did not reduce or simplify the part geometry, the thermal history of the impeller was predicted with an approximate mean absolute error of 6% (standard deviation 0.8%) and a root mean square error of 23 K (standard deviation 3.7 K). Moreover, the thermal history was simulated within 40 min using desktop computing, which is considerably less than the 16 h required to build the impeller part. Furthermore, the graph theory thermal history predictions were compared with a proprietary LPBF thermal modeling software and a non-proprietary finite element simulation. For a similar level of root mean square error (28 K), the graph theory approach converged in 17 min, vs. 4.5 h for non-proprietary finite element analysis.
  2. Abstract

    The goal of this work is to predict the effect of part geometry and process parameters on the instantaneous spatial distribution of heat, called the heat flux or thermal history, in metal parts as they are being built layer-by-layer using additive manufacturing (AM) processes. In pursuit of this goal, the objective of this work is to develop and verify a graph theory-based approach for predicting the heat flux in metal AM parts. This objective is consequential to overcome the current poor process consistency and part quality in AM. One of the main reasons for poor part quality in metal AM processes is ascribed to the heat flux in the part. For instance, constrained heat flux because of ill-considered part design leads to defects, such as warping and thermal stress-induced cracking. Existing non-proprietary approaches to predict the heat flux in AM at the part level predominantly use mesh-based finite element analyses that are computationally tortuous: the simulation of a few layers typically requires several hours, if not days. Hence, to alleviate these challenges in metal AM processes, there is a need for efficient computational thermal models to predict the heat flux and thereby guide part design and selection of process parameters instead of expensive empirical testing. Compared to finite element analysis techniques, the proposed mesh-free graph theory-based approach facilitates layer-by-layer simulation of the heat flux within a few minutes on a desktop computer.
To explore these assertions, we conducted the following two studies: (1) comparing the heat diffusion trends predicted using the graph theory approach with finite element analysis and analytical heat transfer calculations based on Green’s functions for an elementary cuboid geometry subjected to an impulse heat input in a certain part of its volume, and (2) simulating the layer-by-layer deposition of three part geometries in a laser powder bed fusion metal AM process with: (a) Goldak’s moving heat source finite element method, (b) the proposed graph theory approach, and (c) further comparing the heat flux predictions from the last two approaches with a commercial solution. From the first study, we report that the heat flux trend approximated by the graph theory approach is accurate within 5% of the Green’s functions-based analytical solution (in terms of the symmetric mean absolute percentage error). Results from the second study show that the heat flux trends predicted for the AM parts using the graph theory approach agree with finite element analysis with error less than 15%. More pertinently, the computational time for predicting the heat flux was significantly reduced with graph theory; for instance, in one of the AM case studies, the time taken to predict the heat flux in a part was less than 3 minutes using the graph theory approach, compared to over 3 hours with finite element analysis. While this paper is restricted to theoretical development and verification of the graph theory approach for heat flux prediction, our forthcoming research will focus on experimental validation through in-process sensor-based heat flux measurements.
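The 5% figure above is reported in terms of the symmetric mean absolute percentage error (SMAPE). As a minimal sketch of that metric, assuming the common 0–100% formulation (SMAPE definitions vary in the literature, and the temperature values below are made-up illustration, not data from the paper):

```python
import numpy as np

def smape(predicted, observed):
    """Symmetric mean absolute percentage error, in percent (0-100 scale)."""
    p = np.asarray(predicted, dtype=float)
    o = np.asarray(observed, dtype=float)
    # symmetric: normalise each absolute difference by the mean magnitude
    return 100.0 * np.mean(2.0 * np.abs(p - o) / (np.abs(p) + np.abs(o)))

# e.g. model-predicted temperatures vs. an analytical reference solution
pred = [495.0, 510.0, 330.0]
ref  = [500.0, 500.0, 325.0]
err = smape(pred, ref)   # well under the 5% threshold for these values
```

SMAPE is bounded and scale-free, which makes it convenient for comparing temperature fields whose magnitudes differ across layers.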

  3. Purpose: The purpose of this paper is to develop, apply and validate a mesh-free graph theory–based approach for rapid thermal modeling of the directed energy deposition (DED) additive manufacturing (AM) process. Design/methodology/approach: In this study, the authors develop a novel mesh-free graph theory–based approach to predict the thermal history of the DED process. Subsequently, the authors validated the graph theory-predicted temperature trends using experimental temperature data for DED of titanium alloy (Ti-6Al-4V) parts. Temperature trends were tracked by embedding thermocouples in the substrate. The DED process was simulated using the graph theory approach, and the thermal history predictions were validated based on the data from the thermocouples. Findings: The temperature trends predicted by the graph theory approach have a mean absolute percentage error of approximately 11% and a root mean square error of 23°C when compared to the experimental data. Moreover, the graph theory simulation was obtained within 4 min using desktop computing resources, which is less than the build time of 25 min. By comparison, a finite element–based model required 136 min to converge to a similar level of error. Research limitations/implications: This study uses data from fixed thermocouples when printing thin-wall DED parts. In the future, the authors will incorporate infrared thermal camera data from large parts. Practical implications: The DED process is particularly valuable for near-net shape manufacturing, repair and remanufacturing applications. However, DED parts are often afflicted with flaws, such as cracking and distortion. In DED, flaw formation is largely governed by the intensity and spatial distribution of heat in the part during the process, often referred to as the thermal history. Accordingly, fast and accurate thermal models to predict the thermal history are necessary to understand and preclude flaw formation.
Originality/value: This paper presents a new mesh-free computational thermal modeling approach based on graph theory (network science) and applies it to DED. The approach eschews the tedious and computationally demanding meshing aspect of finite element modeling and allows rapid simulation of the thermal history in additive manufacturing. Although graph theory has been applied to thermal modeling of laser powder bed fusion (LPBF), there are distinct phenomenological differences between DED and LPBF that necessitate substantial modifications to the graph theory approach.
  4. Abstract: Computational approaches, especially finite element analysis (FEA), have been rapidly growing in both academia and industry during the last few decades. FEA serves as a powerful and efficient approach for simulating real-life experiments, including industrial product development, machine design, and biomedical research, particularly in biomechanics and biomaterials. Accordingly, FEA has been a “go-to” high-biofidelity software tool to simulate and quantify the biomechanics of the foot–ankle complex, as well as to predict the risk of foot and ankle injuries, which are one of the most common musculoskeletal injuries among physically active individuals. This paper provides a review of the in silico FEA of the foot–ankle complex. First, a brief history of computational modeling methods and finite element (FE) simulations for foot–ankle models is introduced. Second, a general approach to build an FE foot and ankle model is presented, including a detailed procedure to accurately construct, calibrate, verify, and validate an FE model in its appropriate simulation environment. Third, current applications, as well as future improvements of the foot and ankle FE models, especially in the biomedical field, are discussed. Finally, a conclusion is made on the efficiency and development of FEA as a computational approach in investigating the biomechanics of the foot–ankle complex. Overall, this review integrates insightful information for biomedical engineers, medical professionals, and researchers to conduct more accurate research on the foot–ankle FE models in the future.
  5. The current work centers on multi-scale approaches to simulate and predict metallic nano-layers’ thermomechanical responses in crystal plasticity large deformation finite element platforms. The study is divided into two major scales: nano- and homogenized levels, where Cu/Nb nano-layers are designated as case studies. At the nano-scale, a size-dependent constitutive model based on entropic kinetics is developed. A deep-learning adaptive boosting technique named single layer calibration is established to acquire the associated constitutive parameters through a single process applicable to a broad range of setups entirely different from those of the calibration. The model is validated through experimental data with solid agreement, followed by behavioral predictions of multiple cases regarding size, loading pattern, layer type, and geometrical combination effects, for which the performances are discussed. At the homogenized scale, founded on statistical analyses of microcanonical ensembles, a homogenized crystal plasticity-based constitutive model is developed with the aim of expediting computation while retaining accuracy. Accordingly, effective constitutive functionals are realized, where the associated constants are obtained via metaheuristic genetic algorithms. The model is favorably verified with nano-scale data while accelerating the computational processes by several orders of magnitude. Ultimately, a temperature-dependent homogenized constitutive model is developed, where the effective constitutive functionals along with the associated constants are determined. The model is validated by experimental data, with which multiple demonstrations of temperature effects are assessed and analyzed.