Predicting the process of porosity-based ductile damage in polycrystalline metallic materials is an essential practical problem. Ductile damage and its precursors are represented by extreme values in stress and material state quantities, whose spatial probability density functions (PDFs) are highly non-Gaussian with strong fat tails. Traditional deterministic forecasts utilizing sophisticated continuum-based physical models generally fall short in representing the statistics of structural evolution during material deformation, while computational tools that do represent complex structural evolution are typically expensive. The inevitable model error and the lack of uncertainty quantification may also induce significant forecast biases, especially in predicting the extreme events associated with ductile damage. In this paper, a data-driven statistical reduced-order modeling framework is developed to provide a probabilistic forecast of the deformation process of a polycrystal aggregate leading to porosity-based ductile damage, with uncertainty quantification. The framework starts with computing the time evolution of the leading few moments of specific state variables from the spatiotemporal solution of full-field polycrystal simulations. Then a sparse model identification algorithm based on causation entropy, including essential physical constraints, is utilized to discover the governing equations of these moments. An approximate solution of the time evolution of the PDF is obtained from the predicted moments by exploiting the maximum entropy principle. Numerical experiments based on polycrystal realizations of a representative body-centered cubic (BCC) tantalum illustrate a skillful reduced-order model in characterizing the time evolution of the non-Gaussian PDF of the von Mises stress and quantifying the probability of extreme events. The learning process also reveals that the mean stress is not simply an additive forcing that drives the higher-order moments and extreme events; instead, it interacts with the latter in a strongly nonlinear and multiplicative fashion. In addition, the calibrated moment equations provide a reasonably accurate forecast when applied to realizations outside the training data set, indicating the robustness of the model and its skill for extrapolation. Finally, an information-based measure is employed to quantitatively justify that the leading four moments are sufficient to characterize the crucial highly non-Gaussian features throughout the entire deformation history considered.
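As a hedged illustration of the moment-to-PDF step described above, the sketch below recovers a maximum-entropy density from four prescribed moments by minimizing the convex dual of the entropy objective on a bounded grid. The grid bounds, optimizer choice, and moment values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_pdf(moments, x):
    """Max-entropy density on grid x matching E[x^k] = moments[k-1], k = 1..4."""
    dx = x[1] - x[0]                                       # uniform grid spacing
    powers = np.vstack([x**k for k in range(1, 5)])        # shape (4, len(x))

    def dual(lam):
        # Convex dual of entropy maximization: log-partition minus <lam, moments>.
        logp = lam @ powers
        c = logp.max()                                     # for numerical stability
        logZ = c + np.log(np.exp(logp - c).sum() * dx)
        return logZ - lam @ moments

    lam0 = np.array([0.0, -0.5, 0.0, -0.01])               # start near a Gaussian shape
    res = minimize(dual, lam0, method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 20000})
    logp = res.x @ powers
    p = np.exp(logp - logp.max())
    return p / (p.sum() * dx)

# Hypothetical fat-tailed moment set: E[x], E[x^2], E[x^3], E[x^4].
x = np.linspace(-8.0, 10.0, 2001)
m = np.array([0.5, 1.5, 2.0, 12.0])
p = maxent_pdf(m, x)
dx = x[1] - x[0]
print("recovered moments:", [round(float((x**k * p).sum() * dx), 3) for k in range(1, 5)])
```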
Minimum reduced-order models via causal inference
Abstract: Constructing sparse, effective reduced-order models (ROMs) for high-dimensional dynamical data is an active area of research in applied sciences. In this work, we study an efficient approach to identifying such sparse ROMs using an information-theoretic indicator called causation entropy. Given a feature library of possible building block terms for the sought ROMs, the causation entropy ranks the importance of each term to the dynamics conveyed by the training data before a parameter estimation procedure is performed. It thus allows for an efficient construction of a hierarchy of ROMs with varying degrees of sparsity to effectively handle different tasks. This article examines the ability of the causation entropy to identify skillful sparse ROMs when a relatively high-dimensional ROM is required to emulate the dynamics conveyed by the training dataset. We demonstrate that a Gaussian approximation of the causation entropy still performs exceptionally well even in the presence of highly non-Gaussian statistics. Such approximations provide an efficient way to access the otherwise hard-to-compute causation entropies when the selected feature library contains a large number of candidate functions. Besides recovering long-term statistics, we also demonstrate good performance of the obtained ROMs in recovering unobserved dynamics via data assimilation with partial observations, a test that has not been done before for causation-based ROMs of partial differential equations. The paradigmatic Kuramoto–Sivashinsky equation placed in a chaotic regime with highly skewed, multimodal statistics is utilized for these purposes.
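The sketch below shows one plausible form of the Gaussian approximation of causation entropy: each candidate library term is scored by the drop in Gaussian conditional entropy it produces when added to all the other terms. The toy cubic dynamics and the particular library are hypothetical stand-ins for the feature libraries discussed in the abstract.

```python
import numpy as np

def cond_entropy(z, X):
    """Gaussian conditional entropy H(z | X); z has shape (n,), X shape (n, k)."""
    if X.shape[1] == 0:
        return 0.5 * np.log(2 * np.pi * np.e * z.var())
    S = np.cov(np.column_stack([z, X]), rowvar=False)
    # Schur complement gives the conditional variance of z given X.
    var = S[0, 0] - S[0, 1:] @ np.linalg.solve(S[1:, 1:], S[0, 1:])
    return 0.5 * np.log(2 * np.pi * np.e * var)

def causation_entropies(dzdt, lib):
    """Score each library column: entropy drop from adding it to all the others."""
    return np.array([cond_entropy(dzdt, np.delete(lib, j, axis=1))
                     - cond_entropy(dzdt, lib) for j in range(lib.shape[1])])

# Hypothetical samples from dx/dt = x - x^3 plus small noise.
rng = np.random.default_rng(0)
x = rng.normal(size=4000)
dxdt = x - x**3 + 0.05 * rng.normal(size=4000)
lib = np.column_stack([x, x**2, x**3, np.sin(2 * x)])
print(np.round(causation_entropies(dxdt, lib), 4))   # x and x^3 should dominate
```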
- PAR ID: 10562681
- Publisher / Repository: Springer Science + Business Media
- Date Published:
- Journal Name: Nonlinear Dynamics
- Volume: 113
- Issue: 10
- ISSN: 0924-090X
- Format(s): Medium: X
- Size(s): p. 11327-11351
- Sponsoring Org: National Science Foundation
More Like this
-
Computationally efficient modeling of gas turbine combustion is challenging due to the chaotic multi-scale physics and the complex nonlinear interactions between acoustic, hydrodynamic, and chemical processes. A large-eddy simulation (LES) is conducted for the model combustor of Meier et al. (1) using an unstructured-mesh finite volume method, with turbulent combustion effects modeled using a flamelet-based method. The flow field is validated via comparison to averaged and unsteady high-frequency particle image velocimetry (PIV) fields. A high degree of correlation with the experiment is noted in terms of flow-field snapshots and via modal analysis. The dynamics of the precessing vortex core (PVC) are quantitatively characterized using dynamic mode decomposition. The validated full-order model (FOM) dataset is used to construct projection-based ROMs, which aim to reduce the system dimension by projecting the state onto a reduced-dimensional linear manifold. The use of a structure-preserving least-squares formulation (SP-LSVT) guarantees stability of the ROM, in contrast to traditional model reduction techniques. The SP-LSVT ROM provides accurate reconstruction of the combustion dynamics within the training region but faces a significant challenge in future-state predictions. This limitation is mainly due to the increased projection error, which in turn is a direct consequence of the highly chaotic nature of the flow field, involving a wide range of dispersed coherent structures. Formal projection-based ROMs have not previously been applied to a problem of this scale and complexity, and achieving accurate and efficient ROMs is a grand-challenge problem. Further advances in nonlinear manifold projections or adaptive basis projections have the potential to improve the predictive capability of this class of ROMs.
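The SP-LSVT formulation itself is not reproduced here; the sketch below shows only the generic projection step underlying such ROMs, using a plain POD-Galerkin reduction of a toy stable linear system as a stand-in for the combustor FOM.

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, r, dt = 400, 300, 10, 0.01

A = -np.eye(n) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)   # stable toy "FOM" operator
X = np.empty((n, steps))
X[:, 0] = rng.normal(size=n)
for k in range(steps - 1):
    X[:, k + 1] = X[:, k] + dt * (A @ X[:, k])                # explicit-Euler FOM snapshots

U, s, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :r]                                                  # POD basis (leading r modes)
Ar = V.T @ A @ V                                              # Galerkin-projected operator

q = V.T @ X[:, 0]                                             # reduced initial state
for k in range(steps - 1):
    q = q + dt * (Ar @ q)                                     # ROM time march
err = np.linalg.norm(V @ q - X[:, -1]) / np.linalg.norm(X[:, -1])
print("relative error at final snapshot:", err)
```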
-
Abstract: Recurrent neural networks have led to breakthroughs in natural language processing and speech recognition. Here we show that recurrent networks, specifically long short-term memory (LSTM) networks, can also capture the temporal evolution of chemical/biophysical trajectories. Our character-level language model learns a probabilistic model of one-dimensional stochastic trajectories generated from higher-dimensional dynamics. The model captures Boltzmann statistics and also reproduces kinetics across a spectrum of timescales. We demonstrate how training the long short-term memory network is equivalent to learning a path entropy, and that its embedding layer, instead of representing the contextual meaning of characters, here exhibits a nontrivial connectivity between different metastable states in the underlying physical system. We demonstrate our model's reliability through different benchmark systems and a force spectroscopy trajectory for a multi-state riboswitch. We anticipate that our work represents a stepping stone in the understanding and use of recurrent neural networks for the dynamics of complex stochastic molecular systems.
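A minimal sketch of the approach, assuming a PyTorch implementation: a one-dimensional trajectory is discretized into bins ("characters") and an LSTM is trained to predict the next state. The toy Brownian path and all architecture sizes are illustrative guesses, not the authors' settings.

```python
import torch
import torch.nn as nn

class TrajLM(nn.Module):
    """Character-level language model over a discretized 1-D trajectory."""
    def __init__(self, n_states=20, emb=16, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_states, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)

    def forward(self, seq):                                # seq: (batch, time) ints
        h, _ = self.lstm(self.embed(seq))
        return self.head(h)                                # next-state logits

# Discretize a toy Brownian path into 20 equal-probability bins.
torch.manual_seed(0)
x = torch.cumsum(0.1 * torch.randn(1, 5001), dim=1)
edges = torch.quantile(x, torch.linspace(0, 1, 21))[1:-1]  # 19 interior bin edges
seq = torch.bucketize(x, edges)                            # integer states in [0, 20)

model = TrajLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(200):                                    # short demo training run
    logits = model(seq[:, :-1])                            # predict the state at t+1
    loss = loss_fn(logits.reshape(-1, 20), seq[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
print("next-state cross-entropy:", float(loss))
```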
-
High-dimensional data is commonly encountered in various applications, including genomics, as well as image and video processing. Analyzing, computing, and visualizing such data pose significant challenges. Feature extraction methods are crucial in addressing these challenges by obtaining compressed representations that are suitable for analysis and downstream tasks. One effective technique along these lines is sparse coding, which involves representing data as a sparse linear combination of a set of exemplars. In this study, we propose a local sparse coding framework within the context of a classification problem. The objective is to predict the label of a given data point based on labeled training data. The primary optimization problem encourages the representation of each data point using nearby exemplars. We leverage the optimized sparse representation coefficients to predict the label of a test data point by assessing its similarity to the sparse representations of the training data. The proposed framework is computationally efficient and provides interpretable sparse representations. To illustrate the practicality of our proposed framework, we apply it to agriculture for the classification of crop diseases.
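A hedged sketch of a local sparse coding classifier of this kind: a test point is sparse-coded over its k nearest training exemplars and labeled by a coefficient-weighted class vote. The Lasso solver, neighborhood size, and two-blob data are assumptions for illustration, not the authors' formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def local_sparse_predict(x, X_train, y_train, k=25, alpha=0.01):
    """Sparse-code x over its k nearest exemplars, then vote with the
    per-class sum of absolute coefficients."""
    idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]    # local dictionary
    D, labels = X_train[idx], y_train[idx]
    code = Lasso(alpha=alpha, max_iter=50000).fit(D.T, x).coef_  # x ~= D.T @ code
    classes = np.unique(labels)
    votes = [np.abs(code[labels == c]).sum() for c in classes]
    return classes[int(np.argmax(votes))]

# Two hypothetical Gaussian classes in 10 dimensions.
rng = np.random.default_rng(2)
X_train = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(3, 1, (100, 10))])
y_train = np.array([0] * 100 + [1] * 100)
print(local_sparse_predict(rng.normal(3, 1, 10), X_train, y_train))  # expect 1
```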
-
Abstract: Robust principal component analysis (RPCA) is a widely used method for recovering low-rank structure from data matrices corrupted by significant and sparse outliers. These corruptions may arise from occlusions, malicious tampering, or other causes of anomalies, and the joint identification of such corruptions with the low-rank background is critical for process monitoring and diagnosis. However, existing RPCA methods and their extensions largely do not account for the underlying probabilistic distribution of the data matrices, which in many applications is known and can be highly non-Gaussian. We thus propose a new method called RPCA for exponential family distributions (eRPCA), which can perform the desired decomposition into low-rank and sparse matrices when such a distribution falls within the exponential family. We present a novel alternating direction method of multipliers optimization algorithm for efficient decomposition, under either its natural or canonical parametrization. The effectiveness of eRPCA is then demonstrated in two applications: the first for steel sheet defect detection and the second for crime activity monitoring in the Atlanta metropolitan area.
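The exponential-family extension itself is not reproduced here; the sketch below implements the classic Gaussian-case principal component pursuit via ADMM that such a method generalizes (the quadratic fit term is what an exponential-family log-likelihood would replace). Parameter defaults follow common heuristics, not the paper.

```python
import numpy as np

def shrink(M, tau):
    """Elementwise soft-thresholding (the sparse-part proximal step)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular-value thresholding (the low-rank proximal step)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca_admm(X, lam=None, mu=None, iters=300):
    """Classic principal component pursuit via ADMM: X ~= L (low-rank) + S (sparse)."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(X).sum()
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(iters):
        L = svt(X - S + Y / mu, 1.0 / mu)
        S = shrink(X - L + Y / mu, lam / mu)
        Y += mu * (X - L - S)                # dual ascent on the split constraint
    return L, S

# Synthetic test: rank-4 background plus 5% large sparse corruptions.
rng = np.random.default_rng(3)
L0 = rng.normal(size=(60, 4)) @ rng.normal(size=(4, 80))
S0 = (rng.random((60, 80)) < 0.05) * rng.normal(0, 10, size=(60, 80))
L, S = rpca_admm(L0 + S0)
print("low-rank recovery error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```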