

Title: Component-based machine learning paradigm for discovering rate-dependent and pressure-sensitive level-set plasticity models
Conventionally, neural network constitutive laws for path-dependent elasto-plastic solids are trained via supervised learning performed on recurrent neural networks, with the time history of strain as input and the stress as output. However, training a neural network to replicate path-dependent constitutive responses requires significantly more data because of the path dependence. This demand for diverse and abundant accurate data, as well as the lack of interpretability to guide the data generation process, could become major roadblocks for engineering applications. In this work, we attempt to simplify these training processes and improve the interpretability of the trained models by breaking down the training of material models into multiple supervised machine learning programs for elasticity, initial yielding, and hardening laws that can be conducted sequentially. To predict pressure sensitivity and rate dependence of the plastic responses, we reformulate the Hamilton-Jacobi equation such that the yield function is parametrized in the product space spanned by the principal stress, the accumulated plastic strain, and time. To test the versatility of the neural network meta-modeling framework, we conduct multiple numerical experiments where neural networks are trained and validated against (1) data generated from known benchmark models, (2) data obtained from physical experiments, and (3) data inferred from homogenizing sub-scale direct numerical simulations of microstructures. The neural network model is also incorporated into an offline FFT-FEM model to improve the efficiency of the multiscale calculations.
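
A minimal sketch of the component-based idea (not the authors' code; network sizes, inputs, and the training loop are assumptions): a feed-forward network represents a level-set yield function of the principal stresses, the accumulated plastic strain, and the plastic strain rate, trained on its own labeled data after the elasticity model has been fitted, so that each component can be learned and inspected separately.

import torch
import torch.nn as nn

class YieldLevelSet(nn.Module):
    """Level-set yield function f(x); f = 0 marks the yield surface."""
    def __init__(self, hidden=64):
        super().__init__()
        # Inputs: three principal stresses, accumulated plastic strain, plastic strain rate.
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical supervised training on level-set labels (x_i, f_i) generated from
# benchmark models, physical experiments, or homogenized simulations.
model = YieldLevelSet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 5)          # placeholder feature samples
f_target = torch.randn(256, 1)   # placeholder signed level-set values
for _ in range(1000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), f_target)
    loss.backward()
    opt.step()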
Award ID(s):
1846875 1940203
NSF-PAR ID:
10302050
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Applied Mechanics
ISSN:
0021-8936
Page Range / eLocation ID:
1 to 13
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Supervised machine learning via artificial neural network (ANN) has gained significant popularity for many geomechanics applications that involve multi‐phase flow and poromechanics. For unsaturated poromechanics problems, the multi‐physics nature and the complexity of the hydraulic laws make it difficult to design the optimal setup, architecture, and hyper‐parameters of the deep neural networks. This paper presents a meta‐modeling approach that utilizes deep reinforcement learning (DRL) to automatically discover optimal neural network settings that maximize a pre‐defined performance metric for the machine learning constitutive laws. This meta‐modeling framework is cast as a Markov Decision Process (MDP) with well‐defined states (subsets of states representing the proposed neural network (NN) settings), actions, and rewards. Following the selection rules, the artificial intelligence (AI) agent, represented in DRL via NN, self‐learns from taking a sequence of actions and receiving feedback signals (rewards) within the selection environment. By utilizing the Monte Carlo Tree Search (MCTS) to update the policy/value networks, the AI agent replaces the human modeler in the otherwise time‐consuming trial‐and‐error process that leads to the optimized choice of setup from a high‐dimensional parametric space. This approach is applied to generate two key constitutive laws for unsaturated poromechanics problems: (1) the path‐dependent retention curve with distinctive wetting and drying paths, and (2) the flow in the micropores, governed by an anisotropic permeability tensor. Numerical experiments have shown that the resultant ML‐generated material models can be integrated into a finite element (FE) solver to solve initial‐boundary‐value problems as replacements for hand‐crafted constitutive laws.
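
    A simplified sketch of the meta-modeling loop (the states, actions, and reward here are placeholders, and a random-rollout search stands in for the paper's MCTS-driven policy/value networks): each action fixes one neural network setting, and the reward would be the validation score of the resulting constitutive model.

    import random

    ACTION_SPACE = {                       # assumed, illustrative choices only
        "n_layers":   [1, 2, 3],
        "n_units":    [16, 32, 64],
        "activation": ["relu", "tanh"],
    }

    def reward(config):
        # Placeholder: in the framework, this would train the candidate constitutive
        # network (e.g., the retention-curve model) and return its validation accuracy.
        return random.random()

    def rollout():
        state = {}                                    # the state is the partial configuration
        for key, choices in ACTION_SPACE.items():
            state[key] = random.choice(choices)       # take one action per setting
        return state, reward(state)

    best_config, best_score = max((rollout() for _ in range(50)), key=lambda t: t[1])
    print(best_config, best_score)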

     
  2. Plasticity theory aims at describing the yield loci and work hardening of a material under general deformation states. Most of its complexity arises from the nontrivial dependence of the yield loci on the complete strain history of a material and its microstructure. This motivated 3 ingenious simplifications that underpinned a century of developments in this field: 1) yield criteria describing the location of the yield loci; 2) associative or nonassociative flow rules defining the direction of plastic flow; and 3) effective stress–strain laws consistent with the plastic work equivalence principle. However, 2 key complications arise from these simplifications. First, finding equations that describe these 3 assumptions for materials with complex microstructures is not trivial. Second, yield surface evolution needs to be traced iteratively, i.e., through a return mapping algorithm. Here, we show that these assumptions are not needed in the context of sequence learning when using recurrent neural networks, circumventing the above-mentioned complications. This work offers an alternative to currently established plasticity formulations by providing the foundations for finding history- and microstructure-dependent constitutive models through deep learning. 
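
    A minimal sketch of the sequence-learning alternative described above (input/output dimensions and architecture are assumptions, not the authors' setup): a recurrent network maps the strain history directly to the stress history, so no explicit yield criterion, flow rule, or return mapping is written down.

    import torch
    import torch.nn as nn

    class RNNPlasticity(nn.Module):
        def __init__(self, n_strain=6, n_stress=6, hidden=128):
            super().__init__()
            self.rnn = nn.GRU(n_strain, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, n_stress)

        def forward(self, strain_history):             # (batch, time, 6)
            h, _ = self.rnn(strain_history)
            return self.head(h)                         # stress at every time step

    model = RNNPlasticity()
    strain_paths = torch.randn(8, 200, 6)               # placeholder loading paths
    stress_paths = model(strain_paths)                   # (8, 200, 6)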
  3.
    Shape sensing is an emerging technique for the reconstruction of deformed shapes using data from a discrete network of strain sensors. Its prominence is due to its suitability for promising applications such as structural health monitoring in multiple engineering fields and shape capturing in the medical field. In this work, a physics-informed deep learning model, named SenseNet, was developed for shape sensing applications. Unlike existing neural network approaches for shape sensing, SenseNet incorporates knowledge of the physics of the problem, so its performance does not rely on the choice of training data. Compared with numerical physics-based approaches, SenseNet is a mesh-free method and therefore offers convenience for problems with complex geometries. SenseNet is composed of two parts: a neural network that predicts displacements at the given input coordinates, and a physics part that computes the loss using a function incorporating the physics information. The prior knowledge considered in the loss function includes the boundary conditions and physics relations such as the strain–displacement relation, the material constitutive equation, and the governing equation obtained from the law of balance of linear momentum. SenseNet was validated against finite-element solutions for cases with nonlinear displacement and stress fields using bending and fixed tension tests, respectively, in both two and three dimensions. A study of sensor density effects showed that the accuracy of the model can be improved with a larger amount of strain data. Because general three-dimensional governing equations are incorporated in the model, SenseNet is capable of reconstructing deformations in volumes with reasonable accuracy using only surface strain data. Hence, unlike most existing models, SenseNet is not specialized for certain types of elements and can be extended universally even to thick-body applications. 
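
    An illustrative sketch of a SenseNet-style loss (a schematic reading of the description above, not the published implementation; 2D plane-stress linear elasticity with assumed material constants, no body force, and boundary-condition terms omitted for brevity): a displacement network is penalized both for mismatching measured sensor strains and for violating the balance of linear momentum at collocation points.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                        nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 2))
    E, nu = 70e3, 0.3                           # assumed elastic constants
    lam = E * nu / (1 - nu**2)                  # plane-stress "lambda"
    mu = E / (2 * (1 + nu))

    def kinematics(x):
        x = x.requires_grad_(True)
        u = net(x)
        du = torch.stack([torch.autograd.grad(u[:, i].sum(), x, create_graph=True)[0]
                          for i in range(2)], dim=1)        # du[:, i, j] = d u_i / d x_j
        eps = 0.5 * (du + du.transpose(1, 2))                # small-strain tensor
        tr = eps[:, 0, 0] + eps[:, 1, 1]
        sig = 2 * mu * eps + lam * tr[:, None, None] * torch.eye(2)
        return x, eps, sig

    def loss(x_sensor, eps_measured, x_colloc):
        _, eps_s, _ = kinematics(x_sensor)
        data_term = ((eps_s - eps_measured) ** 2).mean()     # sensor-strain mismatch
        xc, _, sig = kinematics(x_colloc)
        div = torch.stack([                                   # residual of div(sigma) = 0
            torch.autograd.grad(sig[:, i, 0].sum(), xc, create_graph=True)[0][:, 0]
            + torch.autograd.grad(sig[:, i, 1].sum(), xc, create_graph=True)[0][:, 1]
            for i in range(2)], dim=1)
        return data_term + (div ** 2).mean()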
  4. The development of data-informed predictive models for dynamical systems is of widespread interest in many disciplines. We present a unifying framework for blending mechanistic and machine-learning approaches to identify dynamical systems from noisily and partially observed data. We compare pure data-driven learning with hybrid models that incorporate imperfect domain knowledge, referring to the discrepancy between an assumed truth model and the imperfect mechanistic model as model error. Our formulation is agnostic to the chosen machine learning model, is presented in both continuous- and discrete-time settings, and is compatible both with model errors that exhibit substantial memory and with errors that are memoryless. First, we study memoryless linear (with respect to parametric dependence) model error from a learning theory perspective, defining excess risk and generalization error. For ergodic continuous-time systems, we prove that both excess risk and generalization error are bounded above by terms that diminish with the square root of T, the time interval over which training data are specified. Second, we study scenarios that benefit from modeling with memory, proving universal approximation theorems for two classes of continuous-time recurrent neural networks (RNNs): both can learn memory-dependent model error, assuming that it is governed by a finite-dimensional hidden variable and that, together, the observed and hidden variables form a continuous-time Markovian system. In addition, we connect one class of RNNs to reservoir computing, thereby relating learning of memory-dependent error to recent work on supervised learning between Banach spaces using random features. Numerical results are presented (Lorenz ’63, Lorenz ’96 multiscale systems) to compare purely data-driven and hybrid approaches, finding hybrid methods less data-hungry and more parametrically efficient. We also find that, while a continuous-time framing allows for robustness to irregular sampling and desirable domain interpretability, a discrete-time framing can provide similar or better predictive performance, especially when data are undersampled and the vector field defining the true dynamics cannot be identified. Finally, we demonstrate numerically how data assimilation can be leveraged to learn hidden dynamics from noisy, partially observed data, and illustrate challenges in representing memory by this approach and in the training of such models. 
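
    A schematic example of the hybrid (mechanistic plus learned model error) setting in the memoryless case (the oscillator, network size, and one-step training signal are illustrative assumptions, not the paper's examples): a learned term g_theta corrects an imperfect vector field f_mech, fitted against estimated time derivatives of the observed states.

    import torch
    import torch.nn as nn

    def f_mech(x):
        # Assumed imperfect physics: a damped oscillator missing part of the true dynamics.
        return torch.stack([x[:, 1], -x[:, 0] - 0.1 * x[:, 1]], dim=1)

    g_theta = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))

    def hybrid_rhs(x):
        return f_mech(x) + g_theta(x)      # dx/dt = mechanistic part + learned model error

    x_obs = torch.randn(500, 2)            # placeholder state snapshots x(t_k)
    dxdt_obs = torch.randn(500, 2)         # placeholder derivative estimates at t_k
    opt = torch.optim.Adam(g_theta.parameters(), lr=1e-3)
    for _ in range(500):
        opt.zero_grad()
        loss = ((hybrid_rhs(x_obs) - dxdt_obs) ** 2).mean()
        loss.backward()
        opt.step()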
  5. Abstract

    The deep operator network (DeepONet) structure has shown great potential in approximating complex solution operators with low generalization errors. Recently, a sequential DeepONet (S-DeepONet) was proposed to use sequential learning models in the branch of DeepONet to predict final solutions given time-dependent inputs. In the current work, the S-DeepONet architecture is extended by modifying the information combination mechanism between the branch and trunk networks to simultaneously predict vector solutions with multiple components at multiple time steps of the evolution history, which is the first in the literature using DeepONets. Two example problems, one on transient fluid flow and the other on path-dependent plastic loading, were shown to demonstrate the capabilities of the model to handle different physics problems. The use of a trained S-DeepONet model in inverse parameter identification via the genetic algorithm is shown to demonstrate the application of the model. In almost all cases, the trained model achieved an $R^2$ value of above 0.99 and a relative $L_2$ error of less than 10% with only 3200 training data points, indicating superior accuracy. The vector S-DeepONet model, having only 0.4% more parameters than a scalar model, can predict two output components simultaneously at an accuracy similar to the two independently trained scalar models with a 20.8% faster training time. The S-DeepONet inference is at least three orders of magnitude faster than direct numerical simulations, and inverse parameter identifications using the trained model are highly efficient and accurate.
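
    A rough sketch of the branch/trunk combination described above (layer sizes, the GRU branch, and the einsum-based merge are assumptions rather than the published architecture): a sequential branch encodes the time-dependent load history, an MLP trunk encodes the space-time query point, and their product yields several solution components at once.

    import torch
    import torch.nn as nn

    class VectorSDeepONet(nn.Module):
        def __init__(self, n_load=1, n_comp=2, p=64, hidden=128):
            super().__init__()
            self.branch = nn.GRU(n_load, hidden, batch_first=True)
            self.branch_head = nn.Linear(hidden, n_comp * p)   # one coefficient set per output component
            self.trunk = nn.Sequential(nn.Linear(3, hidden), nn.Tanh(), nn.Linear(hidden, p))
            self.n_comp, self.p = n_comp, p

        def forward(self, load_history, xyt):
            _, h = self.branch(load_history)                    # final hidden state of the sequential branch
            coeffs = self.branch_head(h[-1]).view(-1, self.n_comp, self.p)
            basis = self.trunk(xyt)                             # trunk basis at query points (n_pts, p)
            return torch.einsum("bcp,np->bcn", coeffs, basis)   # (batch, n_comp, n_pts)

    model = VectorSDeepONet()
    u = model(torch.randn(4, 50, 1), torch.randn(200, 3))       # two components at 200 space-time points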

     