

Title: Atomistic simulation assisted error-inclusive Bayesian machine learning for probabilistically unraveling the mechanical properties of solidified metals
Abstract

Solidification is an integral part of metal manufacturing processes, where quantifying stochastic variations and manufacturing uncertainties is critically important. Accurate molecular dynamics (MD) simulations of metal solidification and the resulting properties are computationally prohibitive for probabilistic analyses in which thousands of random realizations are necessary. Adopting inadequate model sizes and time scales in MD simulations leads to inaccuracies in each random realization, causing a large cumulative statistical error in the probabilistic results obtained through Monte Carlo (MC) simulation. In this work, we present a machine learning (ML) approach that serves as a data-driven surrogate for MD simulations and requires only a few MD runs for training. This efficient yet high-fidelity ML approach enables MC simulations for full-scale probabilistic characterization of solidified metal properties, accounting for stochasticity in influencing factors such as temperature and strain rate. Unlike conventional ML models, the hybrid polynomial correlated function expansion proposed here, being a Bayesian ML approach, is data-efficient. Further, it can account for the effect of uncertainty in the training data by exploiting both the mean and the standard deviation of the MD simulations, which in principle addresses the issue of repeatability in stochastic simulations with low variance. Stochastic numerical results for solidified aluminum are presented based on a complete probabilistic uncertainty quantification of mechanical properties such as Young's modulus, yield strength, and ultimate strength, illustrating that the proposed error-inclusive data-driven framework can predict these properties reasonably well with a significant gain in computational efficiency.
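To make the workflow concrete, here is a minimal sketch (not the authors' code) of the error-inclusive surrogate-plus-Monte-Carlo idea: a Gaussian-process regressor stands in for the paper's hybrid polynomial correlated function expansion, the handful of training points and their statistics are invented placeholders for MD results, and the per-sample MD standard deviation enters the fit as observation noise before thousands of cheap Monte Carlo realizations are drawn on the surrogate.

# Hedged sketch: a GP surrogate stands in for the paper's Bayesian ML model;
# all numbers below are illustrative placeholders, not MD data from the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training data: a few MD runs at (temperature K, strain rate 1/s),
# each reporting the mean and standard deviation of, e.g., Young's modulus (GPa).
X_train = np.array([[300, 1e8], [300, 1e9], [500, 1e8], [500, 1e9], [700, 5e8]], float)
y_mean = np.array([68.0, 70.5, 63.2, 65.8, 60.1])   # illustrative values only
y_std = np.array([1.2, 1.5, 1.1, 1.4, 1.6])          # illustrative values only

X_scaled = X_train / X_train.max(axis=0)              # crude normalization for the kernel
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.5, 0.5]),
                              alpha=y_std**2,         # error-inclusive: per-sample MD variance
                              normalize_y=True)
gp.fit(X_scaled, y_mean)

# Monte Carlo over stochastic inputs: thousands of realizations are cheap on the surrogate.
rng = np.random.default_rng(0)
T = rng.normal(500, 40, 10_000)                       # assumed input distributions
er = 10 ** rng.uniform(8, 9, 10_000)
X_mc = np.column_stack([T, er]) / X_train.max(axis=0)
E_samples = gp.predict(X_mc)
print(f"Young's modulus: mean {E_samples.mean():.1f} GPa, std {E_samples.std():.1f} GPa")

The point of the sketch is the data flow: a few expensive simulations with quantified scatter train the surrogate, and the full probabilistic characterization happens entirely on the surrogate.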

 
NSF-PAR ID: 10487409
Publisher / Repository: Nature Publishing Group
Journal Name: npj Computational Materials
Volume: 10
Issue: 1
ISSN: 2057-3960
Sponsoring Org: National Science Foundation
More Like this
  1. Purpose

    There is recent emphasis on designing new materials and alloys specifically for metal additive manufacturing (AM) processes, in contrast to AM of existing alloys that were developed for other traditional manufacturing methods involving considerably different physics. Process optimization to determine processing recipes for newly developed materials is expensive and time-consuming. The purpose of the current work is to use a systematic printability assessment framework developed by the co-authors to determine windows of processing parameters to print defect-free parts from a binary nickel-niobium alloy (NiNb5) using laser powder bed fusion (LPBF) metal AM.

    Design/methodology/approach

    The printability assessment framework integrates analytical thermal modeling, uncertainty quantification and experimental characterization to determine processing windows for NiNb5 in an accelerated fashion. Test coupons and mechanical test samples were fabricated on a ProX 200 commercial LPBF system. A series of density, microstructure and mechanical property characterizations was conducted to validate the proposed framework.

    Findings

    Near fully-dense parts with more than 99% density were successfully printed using the proposed framework. Furthermore, the mechanical properties of as-printed parts showed low variability, good tensile strength of up to 662 MPa, and tensile ductility 51% higher than what has been reported in the literature.

    Originality/value

    Although many literature studies investigate process optimization for metal AM, there is a lack of a systematic printability assessment framework to determine manufacturing process parameters for newly designed AM materials in an accelerated fashion. Moreover, the majority of existing process optimization approaches involve either time- and cost-intensive experimental campaigns or require the use of proprietary computational materials codes. Through the use of a readily accessible analytical thermal model coupled with statistical calibration and uncertainty quantification techniques, the proposed framework achieves both efficiency and accessibility for the user. Furthermore, this study demonstrates that following this framework results in printed parts with low variability in their mechanical properties.
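As a loose illustration of what a processing-window determination can look like in code, the sketch below sweeps laser power and scan speed and keeps combinations that satisfy a melt-pool depth criterion; the melt_pool_depth function, the layer thickness, and the depth bounds are all invented placeholders and do not represent the authors' analytical thermal model or calibrated criteria.

# Hedged sketch of a processing-window sweep; the melt-pool model and defect criteria
# here are generic placeholders, not the authors' framework.
import numpy as np

def melt_pool_depth(power_W, speed_mm_s):
    """Hypothetical analytical estimate of melt-pool depth (um); stands in for a real thermal model."""
    return 4.0 * power_W / np.sqrt(speed_mm_s)   # illustrative scaling only

layer_thickness_um = 30.0
powers = np.linspace(100, 300, 21)                # W
speeds = np.linspace(400, 1600, 25)               # mm/s

window = []
for P in powers:
    for v in speeds:
        d = melt_pool_depth(P, v)
        # Assumed criteria: deep enough to fuse the layer, shallow enough to avoid keyholing.
        if 1.5 * layer_thickness_um < d < 4.0 * layer_thickness_um:
            window.append((P, v))

print(f"{len(window)} of {powers.size * speeds.size} parameter pairs fall inside the assumed window")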
  2. Abstract

    Uncertainty quantification (UQ) in metal additive manufacturing (AM) has attracted tremendous interest as a route to dramatically improving product reliability. Model-based UQ, which relies on the validity of a computational model, has been widely explored as a potential substitute for UQ based solely on time-consuming and expensive experiments. However, its adoption in practical AM processes requires overcoming two main challenges: (1) inaccurate knowledge of the uncertainty sources and (2) the intrinsic uncertainty associated with the computational model itself. Here, we propose a data-driven framework that tackles both challenges by combining high-throughput physical/surrogate model simulations with the AM-Bench experimental data from the National Institute of Standards and Technology (NIST). We first construct a surrogate model, based on high-throughput physical simulations, for predicting the three-dimensional (3D) melt pool geometry and its uncertainty with respect to AM parameters and uncertainty sources. We then employ a sequential Bayesian calibration method to calibrate the experimental parameters and correct the model, significantly improving the validity of the 3D melt pool surrogate model. Applying the calibrated melt pool model to UQ of the porosity level of AM parts, an important quality factor, demonstrates its potential use in AM quality control. The proposed UQ framework is generally applicable to different AM processes, representing a significant advance toward physics-based quality control of AM products.
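A minimal sketch of the calibration pattern described above, reduced to a single parameter: a hypothetical surrogate maps an absorptivity-like parameter to melt-pool width, and a random-walk Metropolis sampler updates that parameter against a few made-up measurements. The actual paper calibrates a 3D melt-pool surrogate against AM-Bench data with a sequential Bayesian method; only the calibrate-against-data loop is illustrated here.

# Hedged sketch of Bayesian parameter calibration against melt-pool measurements.
# The surrogate, the parameter (absorptivity), and the data are placeholders.
import numpy as np

rng = np.random.default_rng(1)

def surrogate_width(absorptivity):
    """Hypothetical surrogate: predicted melt-pool width (um) as a function of absorptivity."""
    return 250.0 * absorptivity + 20.0

observed = np.array([140.0, 146.0, 143.0])   # invented measurements (um)
sigma = 5.0                                  # assumed measurement noise (um)

def log_post(a):
    if not 0.0 < a < 1.0:                     # uniform prior on (0, 1)
        return -np.inf
    resid = observed - surrogate_width(a)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior over the calibration parameter.
a, chain = 0.5, []
for _ in range(20_000):
    prop = a + rng.normal(0, 0.02)
    if np.log(rng.uniform()) < log_post(prop) - log_post(a):
        a = prop
    chain.append(a)
post = np.array(chain[5_000:])
print(f"calibrated absorptivity: {post.mean():.3f} +/- {post.std():.3f}")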

     
  3. Abstract

    Probabilistic (p-) computing is a physics-based approach to addressing computational problems that are difficult to solve by conventional von Neumann computers. A key requirement for p-computing is the realization of fast, compact, and energy-efficient probabilistic bits. Stochastic magnetic tunnel junctions (MTJs) with low energy barriers, where the relative dwell time in each state is controlled by current, have been proposed as candidates for implementing p-bits. This approach presents challenges due to the need for precise control of a small energy barrier across large numbers of MTJs, and due to the need for an analog control signal. Here we demonstrate an alternative p-bit design based on perpendicular MTJs that uses the voltage-controlled magnetic anisotropy (VCMA) effect to create the random state of a p-bit on demand. The MTJs are stable (i.e. have large energy barriers) in the absence of voltage, and VCMA-induced dynamics are used to generate random numbers in less than 10 ns/bit. We then show a compact method of implementing p-bits by using VC-MTJs without a bias current. As a demonstration of the feasibility of the proposed p-bits and the high quality of the generated random numbers, we solve up to 40-bit integer factorization problems using experimental bit-streams generated by VC-MTJs. Our proposal can impact the development of p-computers, both by supporting a fully spintronic implementation of a p-bit and, alternatively, by enabling true random number generation at low cost for ultralow-power and compact p-computers implemented in complementary metal-oxide semiconductor chips.
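For readers unfamiliar with p-bits, the sketch below implements the standard software model from the p-computing literature, in which a p-bit outputs a random +/-1 whose average follows tanh of its input. It is only a software stand-in and says nothing about the VC-MTJ devices themselves, whose point is to produce this randomness physically and on demand.

# Software stand-in for p-bits: m = sgn(tanh(I) - u), u ~ Uniform(-1, 1).
# Nothing below models the VC-MTJ device; it only reproduces the input-output statistics.
import numpy as np

rng = np.random.default_rng(2)

def p_bit(input_current, n_samples=10_000):
    """Return random +/-1 samples whose average follows tanh(input_current)."""
    u = rng.uniform(-1.0, 1.0, n_samples)
    return np.where(np.tanh(input_current) > u, 1, -1)

for I in (-2.0, 0.0, 2.0):
    m = p_bit(I)
    print(f"I = {I:+.1f}: <m> = {m.mean():+.3f} (tanh(I) = {np.tanh(I):+.3f})")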

     
  4. INTRODUCTION

    Solving quantum many-body problems, such as finding ground states of quantum systems, has far-reaching consequences for physics, materials science, and chemistry. Classical computers have facilitated many profound advances in science and technology, but they often struggle to solve such problems. Scalable, fault-tolerant quantum computers will be able to solve a broad array of quantum problems but are unlikely to be available for years to come. Meanwhile, how can we best exploit our powerful classical computers to advance our understanding of complex quantum systems? Recently, classical machine learning (ML) techniques have been adapted to investigate problems in quantum many-body physics. So far, these approaches are mostly heuristic, reflecting the general paucity of rigorous theory in ML. Although they have been shown to be effective in some intermediate-size experiments, these methods are generally not backed by convincing theoretical arguments to ensure good performance.

    RATIONALE

    A central question is whether classical ML algorithms can provably outperform non-ML algorithms in challenging quantum many-body problems. We provide a concrete answer by devising and analyzing classical ML algorithms for predicting the properties of ground states of quantum systems. We prove that these ML algorithms can efficiently and accurately predict ground-state properties of gapped local Hamiltonians, after learning from data obtained by measuring other ground states in the same quantum phase of matter. Furthermore, under a widely accepted complexity-theoretic conjecture, we prove that no efficient classical algorithm that does not learn from data can achieve the same prediction guarantee. By generalizing from experimental data, ML algorithms can solve quantum many-body problems that could not be solved efficiently without access to experimental data.

    RESULTS

    We consider a family of gapped local quantum Hamiltonians, where the Hamiltonian H(x) depends smoothly on m parameters (denoted by x). The ML algorithm learns from a set of training data consisting of sampled values of x, each accompanied by a classical representation of the ground state of H(x). These training data could be obtained from either classical simulations or quantum experiments. During the prediction phase, the ML algorithm predicts a classical representation of ground states for Hamiltonians different from those in the training data; ground-state properties can then be estimated using the predicted classical representation. Specifically, our classical ML algorithm predicts expectation values of products of local observables in the ground state, with a small error when averaged over the value of x. The run time of the algorithm and the amount of training data required both scale polynomially in m and linearly in the size of the quantum system. Our proof of this result builds on recent developments in quantum information theory, computational learning theory, and condensed matter theory. Furthermore, under the widely accepted conjecture that nondeterministic polynomial-time (NP)-complete problems cannot be solved in randomized polynomial time, we prove that no polynomial-time classical algorithm that does not learn from data can match the prediction performance achieved by the ML algorithm. In a related contribution using similar proof techniques, we show that classical ML algorithms can efficiently learn how to classify quantum phases of matter. In this scenario, the training data consist of classical representations of quantum states, where each state carries a label indicating whether it belongs to phase A or phase B. The ML algorithm then predicts the phase label for quantum states that were not encountered during training. The classical ML algorithm not only classifies phases accurately, but also constructs an explicit classifying function. Numerical experiments verify that our proposed ML algorithms work well in a variety of scenarios, including Rydberg atom systems, two-dimensional random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.

    CONCLUSION

    We have rigorously established that classical ML algorithms, informed by data collected in physical experiments, can effectively address some quantum many-body problems. These rigorous results boost our hopes that classical ML trained on experimental data can solve practical problems in chemistry and materials science that would be too hard to solve using classical processing alone. Our arguments build on the concept of a succinct classical representation of quantum states derived from randomized Pauli measurements. Although some quantum devices lack the local control needed to perform such measurements, we expect that other classical representations could be exploited by classical ML with similarly powerful results. How can we make use of accessible measurement data to predict properties reliably? Answering such questions will expand the reach of near-term quantum platforms.

    Figure caption: Classical algorithms for quantum many-body problems. Classical ML algorithms learn from training data, obtained from either classical simulations or quantum experiments. Then, the ML algorithm produces a classical representation for the ground state of a physical system that was not encountered during training. Classical algorithms that do not learn from data may require substantially longer computation time to achieve the same task.
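A toy stand-in for the learning setup sketched above: a kernel ridge regressor is trained on sampled parameter values x paired with a ground-state property and then queried at Hamiltonians it has not seen. The synthetic ground_state_property function replaces the classical shadow representations the paper actually learns from, so only the train-on-data / predict-on-new-H(x) pattern is shown.

# Toy stand-in: regress a ground-state property on the Hamiltonian parameters x.
# The target function below is synthetic; in the paper the data come from measurements
# (classical shadows) of ground states in the same phase of matter.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
m = 5                                             # number of Hamiltonian parameters

def ground_state_property(x):
    """Synthetic smooth property of the ground state of H(x) (placeholder for measured data)."""
    return np.cos(x @ np.arange(1, m + 1) / m) + 0.1 * np.sum(x**2, axis=-1)

X_train = rng.uniform(-1, 1, (200, m))            # sampled parameter values
y_train = ground_state_property(X_train)          # would come from experiment or simulation

model = KernelRidge(kernel="rbf", gamma=0.5, alpha=1e-3).fit(X_train, y_train)

X_new = rng.uniform(-1, 1, (50, m))               # Hamiltonians not seen in training
err = np.mean(np.abs(model.predict(X_new) - ground_state_property(X_new)))
print(f"mean absolute prediction error on unseen H(x): {err:.3f}")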
  5. Summary

    This paper presents an approach for efficient uncertainty analysis (UA) using an intrusive generalized polynomial chaos (gPC) expansion. The key step of gPC-based uncertainty quantification (UQ) is the stochastic Galerkin (SG) projection, which converts a stochastic model into a set of coupled deterministic models. The SG projection generally yields a high-dimensional integration problem with respect to the number of random variables used to describe the parametric uncertainties in a model. However, when the number of uncertainties is large and the governing equation of the system is highly nonlinear, it can be challenging for the SG-based gPC approach to derive explicit expressions for the gPC coefficients because of slow convergence in the SG projection. To tackle this challenge, we propose a bivariate dimension reduction method (BiDRM) that approximates the high-dimensional integral in the SG projection with a few one- and two-dimensional integrations. The efficiency of the proposed method is demonstrated with three different examples, including chemical reactions and cell signaling. Compared to other UA methods, such as Monte Carlo simulation and nonintrusive stochastic collocation (SC), the proposed method shows superior performance in terms of computational efficiency and UA accuracy.
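For concreteness, the projection step referred to above can be written as follows (notation ours, not lifted from the paper). The output u is expanded in orthogonal polynomials of the random vector \xi = (\xi_1, \dots, \xi_d), and each coefficient is a d-dimensional integral:

u(\xi) \approx \sum_{k=0}^{P} u_k \, \Psi_k(\xi),
\qquad
u_k = \frac{1}{\langle \Psi_k^2 \rangle} \int u(\xi)\, \Psi_k(\xi)\, \rho(\xi)\, d\xi .

Writing g(\xi) = u(\xi)\Psi_k(\xi), the standard bivariate dimension-reduction decomposition about a reference point \mu reads

g(\xi) \approx \sum_{i<j} g(\xi_i, \xi_j, \mu_{\sim ij})
\;-\; (d-2) \sum_{i=1}^{d} g(\xi_i, \mu_{\sim i})
\;+\; \frac{(d-1)(d-2)}{2}\, g(\mu),

where \mu_{\sim i} denotes fixing all variables other than \xi_i at their reference values. Substituting this decomposition into the coefficient integral leaves only one- and two-dimensional integrations, which is the source of the computational savings the abstract describes.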

     