
This content will become publicly available on December 1, 2023

Title: Uncertainty-aware mixed-variable machine learning for materials design
Abstract: Data-driven design shows the promise of accelerating materials discovery but is challenging due to the prohibitive cost of searching the vast design space of chemistry, structure, and synthesis methods. Bayesian optimization (BO) employs uncertainty-aware machine learning models to select promising designs to evaluate, hence reducing the cost. However, BO with mixed numerical and categorical variables, which is of particular interest in materials design, has not been well studied. In this work, we survey frequentist and Bayesian approaches to uncertainty quantification of machine learning with mixed variables. We then conduct a systematic comparative study of their performance in BO using a popular representative model from each group: the random forest-based Lolo model (frequentist) and the latent variable Gaussian process model (Bayesian). We examine the efficacy of the two models in the optimization of mathematical functions, as well as properties of structural and functional materials, where we observe performance differences related to problem dimensionality and complexity. By investigating the machine learning models' predictive and uncertainty estimation capabilities, we provide interpretations of the observed performance differences. Our results provide practical guidance on choosing between frequentist and Bayesian uncertainty-aware machine learning models for mixed-variable BO in materials design.
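The two model families compared in the abstract differ mainly in how they handle categorical inputs. On the Bayesian side, a latent variable Gaussian process maps each level of a categorical variable to a point in a small continuous latent space, so a standard kernel can cover mixed inputs. A minimal sketch of that encoding idea (the latent coordinates and category names below are invented for illustration, not taken from the paper; in a real LVGP the coordinates are estimated from data):

```python
import math

# Hypothetical latent coordinates for three levels of a categorical
# variable; an LVGP would learn these by maximizing the likelihood.
latent = {"matA": (0.0, 0.0), "matB": (1.2, 0.3), "matC": (0.2, 1.1)}

def mixed_kernel(x1, c1, x2, c2, length=1.0):
    """RBF kernel over one numeric input x and one categorical input c,
    with the categorical levels replaced by their latent-space points."""
    z1, z2 = latent[c1], latent[c2]
    d2 = (x1 - x2) ** 2 + sum((a - b) ** 2 for a, b in zip(z1, z2))
    return math.exp(-d2 / (2.0 * length ** 2))

# Identical mixed inputs give correlation 1; differing categories reduce it.
k_same = mixed_kernel(0.5, "matA", 0.5, "matA")
k_diff = mixed_kernel(0.5, "matA", 0.5, "matB")
```

Once the categorical levels live in a continuous space, the rest of the GP machinery (posterior mean, variance, acquisition functions) applies unchanged.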
Journal Name: Scientific Reports
Sponsoring Org: National Science Foundation
More Like this
  1.
    We address uncertainty quantification for Gaussian processes (GPs) under misspecified priors, with an eye towards Bayesian optimization (BO). GPs are widely used in BO because they easily enable exploration based on posterior uncertainty bands. However, this convenience comes at the cost of robustness: a typical function encountered in practice is unlikely to have been drawn from the data scientist's prior, in which case uncertainty estimates can be misleading, and the resulting exploration can be suboptimal. We present a frequentist approach to GP/BO uncertainty quantification. We utilize the GP framework as a working model, but do not assume correctness of the prior. We instead construct a "confidence sequence" (CS) for the unknown function using martingale techniques. There is a necessary cost to achieving robustness: if the prior was correct, posterior GP bands are narrower than our CS. Nevertheless, when the prior is wrong, our CS is statistically valid and empirically outperforms standard GP methods, in terms of both coverage and utility for BO. Additionally, we demonstrate that powered likelihoods provide robustness against model misspecification.
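The powered-likelihood idea mentioned at the end of this abstract can be illustrated with the simplest possible GP: with a Gaussian likelihood, raising it to a power β < 1 is equivalent to inflating the effective noise variance to σ²/β, which widens the posterior band. A hedged one-observation sketch of that effect (this is not the paper's martingale confidence-sequence construction, only the tempering mechanism):

```python
import math

def gp_posterior_1pt(x, x1, y1, noise_var, length=1.0, beta=1.0):
    """GP posterior mean/variance at x given a single observation (x1, y1).

    beta < 1 applies a powered (tempered) Gaussian likelihood, which
    inflates the effective noise variance to noise_var / beta."""
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2.0 * length ** 2))
    eff_noise = noise_var / beta
    k11 = k(x1, x1) + eff_noise
    kx1 = k(x, x1)
    mean = kx1 / k11 * y1
    var = k(x, x) - kx1 ** 2 / k11
    return mean, var

# Tempering (beta=0.5) shrinks the mean toward the prior and widens the band.
m1, v1 = gp_posterior_1pt(0.2, 0.0, 1.0, noise_var=0.1, beta=1.0)
m2, v2 = gp_posterior_1pt(0.2, 0.0, 1.0, noise_var=0.1, beta=0.5)
```

The widened band is more conservative, which is the direction one wants when the prior may be misspecified.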
  2. Abstract

    Machine learning interatomic potentials (IPs) can provide accuracy close to that of first-principles methods, such as density functional theory (DFT), at a fraction of the computational cost. This greatly extends the scope of accurate molecular simulations, providing opportunities for quantitative design of materials and devices on scales hitherto unreachable by DFT methods. However, machine learning IPs have a basic limitation in that they lack a physical model for the phenomena being predicted and therefore have unknown accuracy when extrapolating outside their training set. In this paper, we propose a class of Dropout Uncertainty Neural Network (DUNN) potentials that provide rigorous uncertainty estimates that can be understood from both Bayesian and frequentist statistics perspectives. As an example, we develop a DUNN potential for carbon and show how it can be used to predict uncertainty for static and dynamical properties, including stress and phonon dispersion in graphene. We demonstrate two approaches to propagate uncertainty in the potential energy and atomic forces to predicted properties. In addition, we show that DUNN uncertainty estimates can be used to detect configurations outside the training set, and in some cases, can serve as a predictor for the accuracy of a calculation.

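The general mechanism behind dropout-based uncertainty, as used by DUNN, is to keep dropout active at prediction time and treat the spread over stochastic forward passes as an uncertainty estimate. A toy sketch of that idea (a single made-up layer, not the DUNN potential; the weights and dropout rate are arbitrary):

```python
import random
import statistics

random.seed(0)
weights = [0.5, -1.2, 0.8, 0.3]  # toy single-layer "network"

def forward(x, p_drop=0.5):
    """One stochastic forward pass: each unit is dropped with probability
    p_drop, and survivors are rescaled so the expectation is unchanged."""
    keep = 1.0 - p_drop
    return sum(w * x / keep for w in weights if random.random() > p_drop)

# Monte-Carlo dropout: repeat the stochastic pass and summarize the spread.
samples = [forward(1.0) for _ in range(2000)]
mean = statistics.mean(samples)   # predictive mean (expected value is 0.4 here)
std = statistics.stdev(samples)   # predictive uncertainty
```

In DUNN the same mechanism is applied to a trained interatomic potential, so the spread over dropout masks propagates into uncertainties on energies, forces, and derived properties.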
    With an unprecedented combination of mechanical and electrical properties, polymer nanocomposites have the potential to be widely used across multiple industries. Tailoring nanocomposites to meet application-specific requirements remains a challenging task, owing to the vast, mixed-variable design space that includes the composition (i.e., choice of polymer, nanoparticle, and surface modification) and microstructure (i.e., dispersion and geometric arrangement of particles) of the nanocomposite material. Modeling properties of the interphase, the region surrounding a nanoparticle, introduces additional complexity to the design process and requires computationally expensive simulations. As a result, previous attempts at designing polymer nanocomposites have focused on finding the optimal microstructure for only a fixed combination of constituents. In this article, we propose a data-centric design framework to concurrently identify optimal composition and microstructure using mixed-variable Bayesian optimization. This framework integrates experimental data with state-of-the-art techniques in interphase modeling, microstructure characterization and reconstruction, and machine learning. Latent variable Gaussian processes (LVGPs) quantify the lack-of-data uncertainty over the mixed-variable design space, which consists of qualitative and quantitative material design variables. The design of electrically insulating nanocomposites is cast as a multicriteria optimization problem with the goal of maximizing dielectric breakdown strength while minimizing dielectric permittivity and dielectric loss. Within tens of simulations, our method identifies a diverse set of designs on the Pareto frontier, indicating the tradeoff between dielectric properties. These findings establish data-centric design, which effectively integrates experimental data with simulations for Bayesian optimization, as an effective approach for the design of engineered material systems.
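The multicriteria step in this abstract hinges on Pareto dominance: a design is kept only if no other design is at least as good in every objective and strictly better in at least one. A minimal, generic filter (the objective values below are illustrative "maximize both" numbers, not the dielectric properties from the paper; objectives to be minimized can simply be negated first):

```python
def pareto_front(points):
    """Return the points not dominated by any other point, assuming all
    objectives are to be maximized."""
    def dominates(q, p):
        # q dominates p: at least as good everywhere, strictly better somewhere.
        return (all(qi >= pi for qi, pi in zip(q, p))
                and any(qi > pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Four hypothetical designs scored on two objectives.
designs = [(1.0, 2.0), (2.0, 1.0), (0.5, 0.5), (1.0, 1.0)]
front = pareto_front(designs)  # only the two non-dominated trade-off points
```

This brute-force check is O(n²), which is fine for the "tens of simulations" regime the abstract describes.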
  4. Flexible thermoelectric generators (TEGs) have shown immense potential for serving as a power source for wearable electronics and the Internet of Things. A key challenge preventing large-scale application of TEGs lies in the lack of a high-throughput processing method that can sinter thermoelectric (TE) materials rapidly while maintaining their high thermoelectric properties. Herein, we integrate high-throughput experimentation and Bayesian optimization (BO) to accelerate the discovery of the optimum sintering conditions of silver–selenide TE films using an ultrafast intense pulsed light (flash) sintering technique. Because of the high-dimensional nature of the flash sintering optimization problem, a Gaussian process regression (GPR) machine learning model is established to rapidly recommend the optimum flash sintering variables based on Bayesian expected improvement. For the first time, an ultrahigh-power-factor flexible TE film (a power factor of 2205 μW m⁻¹ K⁻² with a zT of 1.1 at 300 K) is demonstrated with a sintering time of less than 1.0 second, which is several orders of magnitude shorter than that of conventional thermal sintering techniques. The films also show excellent flexibility, with 92% retention of the power factor (PF) after 10³ bending cycles at a 5 mm bending radius. In addition, a wearable thermoelectric generator based on the flash-sintered films generates a very competitive power density of 0.5 mW cm⁻² at a temperature difference of 10 K. This work not only shows the tremendous potential of high-performance, flexible silver–selenide TEGs but also demonstrates a machine learning-assisted flash sintering strategy that could be used for ultrafast, high-throughput, and scalable processing of functional materials for a broad range of energy and electronic applications.
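The "Bayesian expected improvement" criterion used here to pick the next sintering condition has a closed form when the surrogate's prediction at a candidate is Gaussian. A generic sketch of that acquisition function (the numbers below are illustrative, not sintering parameters from the paper):

```python
import math

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """Closed-form EI for maximization, given the surrogate's predictive
    mean mu and standard deviation sigma, and the best value seen so far."""
    if sigma <= 0.0:
        return max(mu - f_best - xi, 0.0)
    z = (mu - f_best - xi) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal PDF
    return (mu - f_best - xi) * Phi + sigma * phi

# A candidate with higher predicted mean and more uncertainty scores higher
# than a confident candidate predicted to be slightly worse than f_best.
ei_a = expected_improvement(mu=1.2, sigma=0.3, f_best=1.0)
ei_b = expected_improvement(mu=0.9, sigma=0.05, f_best=1.0)
```

EI is always non-negative, which is why even candidates predicted to be worse than the incumbent retain a small exploration value.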
  5.
    Abstract: Scientific and engineering problems often require the use of artificial intelligence to aid understanding and the search for promising designs. While Gaussian processes (GP) stand out as easy-to-use and interpretable learners, they have difficulties in accommodating big data sets, categorical inputs, and multiple responses, which has become a common challenge for a growing number of data-driven design applications. In this paper, we propose a GP model that utilizes latent variables and functions obtained through variational inference to address the aforementioned challenges simultaneously. The method is built upon the latent-variable Gaussian process (LVGP) model where categorical factors are mapped into a continuous latent space to enable GP modeling of mixed-variable data sets. By extending variational inference to LVGP models, the large training data set is replaced by a small set of inducing points to address the scalability issue. Output response vectors are represented by a linear combination of independent latent functions, forming a flexible kernel structure to handle multiple responses that might have distinct behaviors. Comparative studies demonstrate that the proposed method scales well for large data sets with over 10⁴ data points, while outperforming state-of-the-art machine learning methods without requiring much hyperparameter tuning. In addition, an interpretable latent space is obtained to draw insights into the effect of categorical factors, such as those associated with "building blocks" of architectures and element choices in metamaterial and materials design. Our approach is demonstrated for machine learning of ternary oxide materials and topology optimization of a multiscale compliant mechanism with aperiodic microstructures and multiple materials.
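The multi-response construction this abstract describes — each output represented as a linear combination of a few shared, independent latent functions — can be sketched generically. In the sketch below the latent functions and mixing weights are invented for illustration; in the actual model both are learned along with the GP hyperparameters:

```python
import math

# Two hypothetical shared latent functions.
latent_funcs = [lambda x: math.sin(x), lambda x: x ** 2]

# Mixing matrix: each row gives one response's weights on the latent functions.
mixing = [
    [1.0, 0.5],   # response 0
    [0.2, -1.0],  # response 1
]

def responses(x):
    """Evaluate all responses at x as linear combinations of the shared
    latent function values."""
    g = [f(x) for f in latent_funcs]
    return [sum(a * gi for a, gi in zip(row, g)) for row in mixing]

y = responses(1.0)  # two correlated responses built from two latent functions
```

Because the responses share latent functions, information observed about one output informs predictions for the others, which is the point of the flexible kernel structure the abstract mentions.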