Creators/Authors contains: "Vlassis, Nikolaos N."


  1. We introduce a denoising diffusion algorithm to discover microstructures with nonlinear fine-tuned properties. Denoising diffusion probabilistic models are generative models that use diffusion-based dynamics to gradually denoise images and generate realistic synthetic samples. By learning the reverse of a Markov diffusion process, we design an artificial intelligence that efficiently manipulates the topology of microstructures to generate a massive number of prototypes whose constitutive responses are sufficiently close to designated nonlinear constitutive behaviors. To identify the subset of microstructures with sufficiently precise fine-tuned properties, a convolutional neural network surrogate is trained to replace high-fidelity finite element simulations and filter out prototypes outside the admissible range. Results of this study indicate that the denoising diffusion process is capable of creating microstructures with fine-tuned nonlinear material properties within the latent space of the training data. More importantly, this denoising diffusion algorithm can be easily extended to incorporate additional topological and geometric modifications by introducing high-dimensional structures embedded in the latent space. Numerical experiments are conducted on the open-source Mechanical MNIST data set (Lejeune, 2020). Consequently, this algorithm is not only capable of performing inverse design of nonlinear effective media, but also learns the nonlinear structure–property map to quantitatively understand the multiscale interplay among geometry, topology, and effective macroscopic properties.
    Free, publicly-accessible full text available August 1, 2024
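    The reverse (denoising) Markov chain and the surrogate-based filtering described in this abstract can be sketched as follows. This is a minimal illustration of a standard DDPM reverse update, not the paper's actual architecture; the schedule, the stand-in denoiser, the stand-in surrogate, and the admissibility threshold are all assumptions for demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T = 50
    betas = np.linspace(1e-4, 0.02, T)      # linear variance schedule (assumed)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    def denoiser(x, t):
        """Stand-in for the trained noise-prediction network eps_theta(x, t)."""
        return 0.1 * x                       # placeholder prediction

    x = rng.standard_normal((8, 8))          # start a "microstructure" from pure noise
    for t in reversed(range(T)):
        eps = denoiser(x, t)
        # Posterior mean of the reverse step (standard DDPM update rule)
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                            # inject noise except at the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)

    def surrogate(x):
        """Stand-in for the CNN surrogate replacing finite element simulation."""
        return float(np.mean(x**2))

    # Keep only prototypes whose predicted response falls in the admissible range
    admissible = surrogate(x) < 10.0
    ```

    In the paper's pipeline, the surrogate check would be applied to every generated prototype so that only samples close to the designated constitutive behavior survive.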
  2. Experimental data are often costly to obtain, which makes it difficult to calibrate complex models. For many models, an experimental design that produces the best calibration given a limited experimental budget is not obvious. This paper introduces a deep reinforcement learning (RL) algorithm for design of experiments that maximizes the information gain measured by Kullback–Leibler divergence obtained via the Kalman filter (KF). This combination enables experimental design for rapid online experiments where manual trial-and-error is not feasible in the high-dimensional parametric design space. We formulate possible configurations of experiments as a decision tree and a Markov decision process, where a finite choice of actions is available at each incremental step. Once an action is taken, a variety of measurements are used to update the state of the experiment. This new data leads to a Bayesian update of the parameters by the KF, which is used to enhance the state representation. In contrast to the Nash–Sutcliffe efficiency index, which requires additional sampling to test hypotheses for forward predictions, the KF can lower the cost of experiments by directly estimating the values of new data acquired through additional actions. In this work, our applications focus on mechanical testing of materials. Numerical experiments with complex, history-dependent models are used to verify the implementation and benchmark the performance of the RL-designed experiments.
    Free, publicly-accessible full text available July 1, 2024
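    The reward signal sketched in this abstract — a Kalman filter update followed by a Kullback–Leibler information gain — can be illustrated in a scalar setting. The one-parameter Gaussian model and the numbers below are assumptions for illustration, not the paper's actual experimental setup.

    ```python
    import numpy as np

    def kf_update(mu, var, y, obs_var):
        """Scalar Kalman update for a direct, noisy observation y of the parameter."""
        k = var / (var + obs_var)            # Kalman gain
        mu_post = mu + k * (y - mu)
        var_post = (1.0 - k) * var
        return mu_post, var_post

    def kl_gaussian(mu_q, var_q, mu_p, var_p):
        """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for univariate Gaussians."""
        return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

    mu, var = 0.0, 1.0                       # prior belief about one model parameter
    y, obs_var = 0.8, 0.25                   # measurement produced by one RL action
    mu_post, var_post = kf_update(mu, var, y, obs_var)

    # Information gain of the action: divergence of posterior from prior.
    # In the RL formulation this would serve as the reward for that action.
    gain = kl_gaussian(mu_post, var_post, mu, var)
    ```

    Because the gain is computed directly from the filter's belief update, no extra forward-prediction sampling is needed to score an action, which is the cost advantage the abstract contrasts with the Nash–Sutcliffe efficiency index.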
  3. (Abstract not available.)
  4. Abstract

    We present a machine learning framework to train and validate neural networks to predict the anisotropic elastic response of β-HMX, a monoclinic organic molecular crystal, in the geometrically nonlinear regime. A filtered molecular dynamics (MD) simulation database is used to train neural networks with a Sobolev norm that uses the stress measure and a reference configuration to deduce the elastic stored free energy functional. To improve the accuracy of the elasticity tangent predictions originating from the learned stored free energy, a transfer learning technique is used to introduce additional tangential constraints from the data, while necessary conditions (e.g., strong ellipticity, crystallographic symmetry) for the correctness of the model are either introduced as additional physical constraints or incorporated in the validation tests. Assessment of the neural networks is based on (1) the accuracy with which they reproduce the bottom-line constitutive responses predicted by MD, (2) the robustness of the models measured by detailed examination of their stability and uniqueness, and (3) the admissibility of the predicted responses with respect to mechanics principles in the finite-deformation regime. We compare the training efficiency of the neural networks under different Sobolev constraints and assess the accuracy and robustness of the models against MD benchmarks for β-HMX.

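    The Sobolev-norm idea in this abstract — fitting an energy functional while also penalizing the error in its derivative (the stress measure) — can be sketched in one dimension. The quadratic energy model, the synthetic data, and the grid search below are assumptions for illustration only, not the paper's neural network training.

    ```python
    import numpy as np

    # Ground-truth toy material: psi(E) = 2 E^2, so stress sigma = d psi / d E = 4 E
    E = np.linspace(-0.1, 0.1, 21)           # strain samples (assumed range)
    psi_true = 2.0 * E**2
    sigma_true = 4.0 * E

    def sobolev_loss(c):
        """H^1-style loss for the one-parameter model psi(E) = c * E^2 (sigma = 2 c E)."""
        psi_err = np.mean((c * E**2 - psi_true) ** 2)         # energy (L^2) term
        sigma_err = np.mean((2.0 * c * E - sigma_true) ** 2)  # derivative (stress) term
        return psi_err + sigma_err

    # Brute-force grid search stands in for gradient-based training
    cs = np.linspace(0.0, 4.0, 401)
    c_best = cs[np.argmin([sobolev_loss(c) for c in cs])]
    ```

    Supervising the derivative term is what makes the learned energy yield accurate stresses (and, with second derivatives, accurate elasticity tangents), which is the motivation for the Sobolev training and the tangential transfer-learning constraints described above.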