This paper introduces a computationally efficient framework for the optimal design of engineering systems governed by multiphysics, nonlinear partial differential equations (PDEs) and subject to high-dimensional spatial uncertainty. The focus is on 3D-printed silica aerogel-based thermal break components in building envelopes, where the objective is to maximize thermal insulation performance while ensuring mechanical reliability by mitigating stress concentrations. Material porosity is modeled as a spatially correlated Gaussian random field, yielding a high-dimensional stochastic design space whose dimensionality corresponds to the mesh resolution after finite element discretization. A robust design objective is employed, incorporating statistical moments of the thermal performance metric in conjunction with a probabilistic (chance) constraint that requires the p-norm of the von Mises stress field to remain below a critical threshold, effectively controlling stress concentrations across the domain. To alleviate the substantial computational burden associated with Monte Carlo estimation of statistical moments, a second-order Taylor series approximation is introduced as a control variate, significantly accelerating convergence. Furthermore, a continuation-based strategy is developed to regularize the non-differentiable chance constraints, enabling the use of an efficient gradient-based Newton–Conjugate Gradient optimization algorithm. The proposed framework achieves computational scalability that is effectively independent of the stochastic design space dimensionality. Numerical experiments on two- and three-dimensional thermal breaks in building insulation demonstrate the method's efficacy in solving large-scale, PDE-constrained, chance-constrained optimization problems with uncertain parameter spaces reaching dimensions in the hundreds of thousands.
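To make the control-variate idea concrete, the following is a minimal sketch, not the authors' implementation: a toy nonlinear quantity of interest stands in for the PDE-based thermal performance metric, its second-order Taylor expansion about the mean input serves as the control variate, and the surrogate's expectation is evaluated analytically for the Gaussian input. All function names, dimensions, and constants are illustrative assumptions.

```python
# Sketch: second-order Taylor surrogate as a control variate for the Monte
# Carlo mean of a nonlinear QoI Q(m) with Gaussian input m ~ N(mbar, C).
import numpy as np

rng = np.random.default_rng(0)
d = 50                                    # illustrative input dimension
mbar = np.zeros(d)
A = rng.standard_normal((d, d)) / d
C = A @ A.T + 0.1 * np.eye(d)             # toy covariance of the random field
Lc = np.linalg.cholesky(C)

def Q(m):                                 # toy nonlinear QoI standing in for the PDE output
    return np.exp(0.1 * m.sum()) + 0.5 * m @ m

# Gradient and Hessian of Q at mbar (analytic for this toy QoI)
g = 0.1 * np.exp(0.1 * mbar.sum()) * np.ones(d) + mbar
H = 0.01 * np.exp(0.1 * mbar.sum()) * np.ones((d, d)) + np.eye(d)

def taylor(m):                            # quadratic surrogate T(m)
    dm = m - mbar
    return Q(mbar) + g @ dm + 0.5 * dm @ H @ dm

ET = Q(mbar) + 0.5 * np.trace(H @ C)      # E[T] is analytic for Gaussian m

N = 200
samples = mbar + (Lc @ rng.standard_normal((d, N))).T
q = np.array([Q(m) for m in samples])
t = np.array([taylor(m) for m in samples])

mc_mean = q.mean()                        # plain Monte Carlo estimate
cv_mean = (q - t).mean() + ET             # control-variate estimate
print("plain MC:", mc_mean, " control variate:", cv_mean)
print("variance reduction factor:", q.var() / (q - t).var())
```

The variance-reduction factor printed at the end shows why the surrogate pays off: the residual Q - T has far smaller variance than Q itself whenever the quadratic expansion is accurate near the mean.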
High-Dimensional Stochastic Design Optimization under Dependent Random Variables by a Dimensionally Decomposed Generalized Polynomial Chaos Expansion
Newly restructured generalized polynomial chaos expansion (GPCE) methods for high-dimensional design optimization in the presence of input random variables with arbitrary, dependent probability distributions are reported. The methods feature a dimensionally decomposed GPCE (DD-GPCE) for statistical moment and reliability analyses associated with a high-dimensional stochastic response; a novel synthesis between the DD-GPCE approximation and score functions for estimating the first-order design sensitivities of the statistical moments and failure probability; and a standard gradient-based optimization algorithm, together constituting the single-step DD-GPCE and multipoint single-step DD-GPCE (MPSS-DD-GPCE) methods. In these new design methods, the multivariate orthonormal basis functions are assembled consistent with the chosen degree of interaction between input variables and the polynomial order, thus helping to mitigate the curse of dimensionality to the extent possible. In addition, when coupled with score functions, the DD-GPCE approximation leads to analytical formulae for calculating the design sensitivities. More importantly, the statistical moments, failure probability, and their design sensitivities are all determined concurrently from a single stochastic analysis or simulation. Numerical results affirm that the proposed methods yield accurate and computationally efficient optimal solutions of mathematical problems and design solutions for simple mechanical systems. Finally, the successful stochastic shape optimization of a bogie side frame with 41 random variables demonstrates the power of the MPSS-DD-GPCE method in solving industrial-scale engineering design problems.
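As a point of reference for how moments follow from an orthonormal chaos expansion, the sketch below fits a small Hermite PCE by regression and reads the mean and variance directly off the coefficients. It assumes independent standard-normal inputs and a toy response, so it illustrates only the truncation by polynomial order and degree of interaction, not the dependent-variable DD-GPCE construction or the score-function sensitivities of the paper.

```python
# Sketch: truncated orthonormal Hermite PCE fit by regression; mean and
# variance are recovered directly from the coefficients.
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial, sqrt
from itertools import product

rng = np.random.default_rng(1)
order, max_interaction = 3, 2            # polynomial order and degree of interaction

def y(x):                                # toy response standing in for the FE model
    return np.sin(x[:, 0]) + 0.3 * x[:, 0] * x[:, 1] + 0.1 * x[:, 1] ** 2

X = rng.standard_normal((2000, 2))
# multi-indices kept by the truncation: total degree <= order and
# number of active variables <= max_interaction
idx = [a for a in product(range(order + 1), repeat=2)
       if sum(a) <= order and sum(k > 0 for k in a) <= max_interaction]

def basis(X):
    V = [hermevander(X[:, j], order) for j in range(2)]   # 1-D He_n values
    cols = []
    for a in idx:
        col = np.ones(len(X))
        for j, n in enumerate(a):
            col = col * V[j][:, n] / sqrt(factorial(n))   # orthonormalize He_n
        cols.append(col)
    return np.column_stack(cols)

c, *_ = np.linalg.lstsq(basis(X), y(X), rcond=None)
mean = c[idx.index((0, 0))]              # coefficient of the constant basis function
var = np.sum(c ** 2) - mean ** 2         # second moment minus squared mean
print(mean, var)
```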
- Award ID(s): 2317172
- PAR ID: 10628786
- Publisher / Repository: Begell House
- Date Published:
- Journal Name: International Journal for Uncertainty Quantification
- Volume: 13
- Issue: 4
- ISSN: 2152-5080
- Page Range / eLocation ID: 23 to 59
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
We introduce a novel framework called REIN: Reliability Estimation by learning an Importance sampling (IS) distribution with Normalizing flows (NFs). The NFs learn probability space maps that transform the probability distribution of the input random variables into a quasi-optimal IS distribution. NFs stack together invertible neural networks to construct differentiable bijections with efficiently computed Jacobian determinants. The NF 'pushes forward' a realization from the input probability distribution into a realization from the IS distribution, with importance weights calculated using the change of variables formula. We also propose a loss function to learn an NF map that minimizes the reverse Kullback–Leibler divergence between the 'pushforward' distribution and a sequentially updated target distribution obtained by modifying the optimal IS distribution. We demonstrate REIN's efficacy on a set of benchmark problems that feature very low failure rates, multiple failure modes and high dimensionality, while comparing against other variance reduction methods. We also consider two simple applications, the reliability analyses of a thirty-four story building and a cantilever tube, to demonstrate the applicability of REIN to practical problems of interest. As compared to other methods, REIN is shown to be useful for high-dimensional reliability estimation problems with very small failure probabilities.
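For orientation, the fragment below shows the plain importance-sampling mechanics that REIN builds on: a fixed shifted-Gaussian proposal stands in for the NF-learned quasi-optimal IS distribution, and the density-ratio weights play the role of the change-of-variables weights. The scalar limit state and sample sizes are invented for illustration.

```python
# Sketch: importance sampling for a small failure probability with a fixed
# proposal; REIN instead learns the proposal with a normalizing flow.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
beta = 4.0                                   # failure event: x >= beta
N = 10_000

x_mc = rng.standard_normal(N)                # crude Monte Carlo rarely hits the failure region
p_mc = np.mean(x_mc >= beta)

x_is = beta + rng.standard_normal(N)         # proposal centred at the failure boundary
w = norm.pdf(x_is) / norm.pdf(x_is, loc=beta)    # density-ratio importance weights
p_is = np.mean((x_is >= beta) * w)

print(p_mc, p_is, norm.sf(beta))             # exact reference value, about 3.2e-5
```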
We consider the problem of finding nearly optimal solutions of optimization problems with random objective functions. Such problems arise widely in the theory of random graphs, theoretical computer science, and statistical physics. Two concrete problems we consider are (a) optimizing the Hamiltonian of a spherical or Ising p-spin glass model, and (b) finding a large independent set in a sparse Erdős–Rényi graph. Two families of algorithms are considered: (a) low-degree polynomials of the input, a general framework that captures methods such as approximate message passing and local algorithms on sparse graphs, among others; and (b) the Langevin dynamics algorithm, a canonical Monte Carlo analogue of the gradient descent algorithm (applicable only for the spherical p-spin glass Hamiltonian). We show that neither family of algorithms can produce nearly optimal solutions with high probability. Our proof uses the fact that both models are known to exhibit a variant of the overlap gap property (OGP) of near-optimal solutions. Specifically, for both models, every two solutions whose objective values are above a certain threshold are either close to or far from each other. The crux of our proof is the stability of both algorithms: a small perturbation of the input induces a small perturbation of the output. By an interpolation argument, such a stable algorithm cannot overcome the OGP barrier. The stability of the Langevin dynamics is an immediate consequence of the well-posedness of stochastic differential equations. The stability of low-degree polynomials is established using concepts from Gaussian and Boolean Fourier analysis, including noise sensitivity, hypercontractivity, and total influence.
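To show the algorithmic form of the second family, here is a toy Euler–Maruyama discretization of Langevin dynamics on the sphere for a quadratic (2-spin) Hamiltonian. The paper's setting is the spherical or Ising p-spin model and its analysis, not this code; the crude sphere projection and all constants are arbitrary illustrative choices.

```python
# Sketch: discretized Langevin dynamics on the sphere for a toy quadratic energy.
import numpy as np

rng = np.random.default_rng(3)
n, steps, dt, beta = 200, 5000, 1e-3, 2.0           # arbitrary toy constants
J = rng.standard_normal((n, n))
J = (J + J.T) / np.sqrt(2 * n)                      # symmetric Gaussian coupling matrix

def grad_H(x):                                      # H(x) = -x @ J @ x / 2, so grad H = -J x
    return -J @ x

x = rng.standard_normal(n)
x *= np.sqrt(n) / np.linalg.norm(x)                 # start on the sphere of radius sqrt(n)
for _ in range(steps):
    x = x - dt * beta * grad_H(x) + np.sqrt(2 * dt) * rng.standard_normal(n)
    x *= np.sqrt(n) / np.linalg.norm(x)             # crude projection back onto the sphere

print("energy per coordinate:", x @ J @ x / (2 * n))
```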
Numerical solutions of stochastic problems require the representation of random functions in their definitions by finite dimensional (FD) models, i.e., deterministic functions of time and finite sets of random variables. It is common to represent the coefficients of these FD surrogates by polynomial chaos (PC) models. We propose a novel model, referred to as the polynomial chaos translation (PCT) model, which matches exactly the marginal distributions of the FD coefficients and approximately their dependence. PC- and PCT-based FD models are constructed for a set of test cases and a wind pressure time series recorded at the boundary layer wind tunnel facility at the University of Florida. The PCT-based models capture the joint distributions of the FD coefficients and the extremes of the target time series accurately, while PC-based FD models do not have this capability.
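The "translation" part of the PCT idea can be illustrated without the full model: map a Gaussian coefficient through its CDF and the inverse of the prescribed marginal, so the marginal is matched exactly by construction. The lognormal target and sample size below are arbitrary stand-ins, and the sketch says nothing about how the paper approximates dependence among coefficients.

```python
# Sketch: translation of a Gaussian variable to a prescribed marginal.
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(4)
z = rng.standard_normal(100_000)          # underlying Gaussian coefficient
target = lognorm(s=0.5)                   # prescribed marginal (arbitrary choice here)
x = target.ppf(norm.cdf(z))               # translation: exact marginal match

print(x.mean(), x.std(), target.mean(), target.std())
```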
Significant advances have been made recently on training neural networks, where the main challenge is in solving an optimization problem with abundant critical points. However, existing approaches to address this issue crucially rely on a restrictive assumption: the training data is drawn from a Gaussian distribution. In this paper, we provide a novel unified framework to design loss functions with desirable landscape properties for a wide range of general input distributions. On these loss functions, remarkably, stochastic gradient descent theoretically recovers the true parameters with global initializations and empirically outperforms the existing approaches. Our loss function design bridges the notion of score functions with the topic of neural network optimization. Central to our approach is the task of estimating the score function from samples, which is of basic and independent interest to theoretical statistics. Traditional estimation methods (e.g., kernel-based) fail right at the outset; we bring statistical methods of local likelihood to bear in designing a novel estimator of score functions that provably adapts to the local geometry of the unknown density.
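As background for why score functions are useful here, the snippet below checks Stein's identity by Monte Carlo for the standard normal, whose first-order score is S(x) = -d/dx log p(x) = x; the identity E[g'(X)] = E[S(X) g(X)] is what lets expectations of derivatives be traded for expectations weighted by the score. The test function and sample size are arbitrary illustrative choices, and this says nothing about the paper's local-likelihood estimator.

```python
# Sketch: Monte Carlo check of Stein's identity E[g'(X)] = E[X g(X)] for X ~ N(0, 1),
# whose first-order score function is S(x) = x.
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(1_000_000)        # samples from the (here known) input density

g = np.tanh(x)                            # an arbitrary smooth test function
dg = 1.0 - np.tanh(x) ** 2                # its derivative

print(dg.mean(), (x * g).mean())          # the two estimates agree up to sampling error
```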