

This content will become publicly available on September 10, 2026

Title: Chance-constrained optimal design of porous thermal insulation systems under spatially correlated uncertainty
This paper introduces a computationally efficient framework for the optimal design of engineering systems governed by multiphysics, nonlinear partial differential equations (PDEs) and subject to high-dimensional spatial uncertainty. The focus is on 3D-printed silica aerogel-based thermal break components in building envelopes, where the objective is to maximize thermal insulation performance while ensuring mechanical reliability by mitigating stress concentrations. Material porosity is modeled as a spatially correlated Gaussian random field, yielding a high-dimensional stochastic design space whose dimensionality corresponds to the mesh resolution after finite element discretization. A robust design objective is employed, incorporating statistical moments of the thermal performance metric, in conjunction with a probabilistic (chance) constraint that restricts the p-norm of the von Mises stress field to below a critical threshold, effectively controlling stress concentrations across the domain. To alleviate the substantial computational burden associated with Monte Carlo estimation of statistical moments, a second-order Taylor series approximation is introduced as a control variate, significantly accelerating convergence. Furthermore, a continuation-based strategy is developed to regularize the non-differentiable chance constraints, enabling the use of an efficient gradient-based Newton–Conjugate Gradient optimization algorithm. The proposed framework achieves computational scalability that is effectively independent of the stochastic design space dimensionality. Numerical experiments on two- and three-dimensional thermal breaks in building insulation demonstrate the method's efficacy in solving large-scale, PDE-constrained, chance-constrained optimization problems with uncertain parameter spaces reaching dimensions in the hundreds of thousands.
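The control-variate idea in the abstract can be sketched in a few lines: a cheap second-order Taylor surrogate of an expensive quantity of interest, whose mean under a Gaussian input is known in closed form, absorbs most of the Monte Carlo variance. The scalar function f, its derivatives, and all constants below are hypothetical stand-ins for the paper's PDE-based thermal performance metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar quantity of interest f(m), standing in for the
# PDE-based performance metric; m is the uncertain parameter.
def f(m):
    return np.exp(-0.5 * m) + 0.1 * m**3

mu, sigma = 1.0, 0.3                              # m ~ N(mu, sigma^2)
df = -0.5 * np.exp(-0.5 * mu) + 0.3 * mu**2       # f'(mu)
d2f = 0.25 * np.exp(-0.5 * mu) + 0.6 * mu         # f''(mu)

def g(m):
    # Second-order Taylor surrogate of f about mu (the control variate).
    return f(mu) + df * (m - mu) + 0.5 * d2f * (m - mu) ** 2

# For Gaussian m, E[g] is available in closed form.
Eg = f(mu) + 0.5 * d2f * sigma**2

m = rng.normal(mu, sigma, size=100_000)
plain = f(m).mean()                     # plain Monte Carlo estimate
cv = (f(m) - g(m)).mean() + Eg         # control-variate estimate

var_plain = f(m).var()                  # per-sample variances
var_cv = (f(m) - g(m)).var()
print(var_cv / var_plain)               # variance-reduction ratio
```

Because the residual f − g is third order in (m − mu), the control-variate samples carry far less variance than the raw samples, which is the acceleration mechanism the abstract describes.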
Award ID(s):
2143662
PAR ID:
10659466
Author(s) / Creator(s):
;
Publisher / Repository:
Springer Nature
Date Published:
Journal Name:
Structural and Multidisciplinary Optimization
Volume:
68
Issue:
9
ISSN:
1615-147X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper introduces a scalable computational framework for optimal design under high-dimensional uncertainty, with application to thermal insulation components. The thermal and mechanical behaviors are described by continuum multi-phase models of porous materials governed by partial differential equations (PDEs), and the design parameter, material porosity, is an uncertain and spatially correlated field. After finite element discretization, these factors lead to a high-dimensional PDE-constrained optimization problem. The framework employs a risk-averse formulation that accounts for both the mean and variance of the design objectives. It incorporates two regularization techniques, the L0-norm and phase field functionals, implemented using continuation numerical schemes to promote spatial sparsity in the design parameters. To ensure efficiency, the framework utilizes a second-order Taylor approximation for the mean and variance and exploits the low-rank structure of the preconditioned Hessian of the design objective. This results in computational costs that are determined by the rank of the preconditioned Hessian, remaining independent of the number of uncertain parameters. The accuracy, scalability with respect to the parameter dimension, and sparsity-promoting abilities of the framework are assessed through numerical examples involving various building insulation components.
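The dimension-independence claim above rests on the preconditioned Hessian having low numerical rank, so that a handful of matrix-free Hessian actions suffice regardless of the parameter dimension. A toy illustration with a synthetic rank-r operator (the sizes and spectrum are made up) using a randomized range finder:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "preconditioned Hessian": a rank-r operator in n dimensions
# with fast spectral decay. Only matrix-free actions H(X) are used.
n, r = 2000, 20
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
eigs = 2.0 ** -np.arange(r)                      # decaying eigenvalues
H = lambda X: U @ (eigs[:, None] * (U.T @ X))    # action of U diag(eigs) U^T

# Randomized range finder: r plus a small oversampling number of
# Hessian actions recover the dominant spectrum, independent of n.
k = r + 5
Q, _ = np.linalg.qr(H(rng.standard_normal((n, k))))
T = Q.T @ H(Q)                                   # small k-by-k projection
approx = np.sort(np.linalg.eigvalsh(T))[::-1][:r]

print(np.max(np.abs(approx - eigs)))             # near machine precision
```

The cost is k = r + 5 operator applications and dense algebra on k-by-k matrices, mirroring the rank-determined (not dimension-determined) complexity claimed in the abstract.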
  2. Newly restructured generalized polynomial chaos expansion (GPCE) methods are reported for high-dimensional design optimization in the presence of input random variables with arbitrary, dependent probability distributions. The methods feature a dimensionally decomposed GPCE (DD-GPCE) for statistical moment and reliability analyses associated with a high-dimensional stochastic response; a novel synthesis between the DD-GPCE approximation and score functions for estimating the first-order design sensitivities of the statistical moments and failure probability; and a standard gradient-based optimization algorithm, together constituting the single-step DD-GPCE and multipoint single-step DD-GPCE (MPSS-DD-GPCE) methods. In these new design methods, the multivariate orthonormal basis functions are assembled consistent with the chosen degree of interaction between input variables and the polynomial order, thus mitigating the curse of dimensionality to the extent possible. In addition, when coupled with score functions, the DD-GPCE approximation leads to analytical formulae for calculating the design sensitivities. More importantly, the statistical moments, failure probability, and their design sensitivities are determined concurrently from a single stochastic analysis or simulation. Numerical results affirm that the proposed methods yield accurate and computationally efficient optimal solutions of mathematical problems and design solutions for simple mechanical systems. Finally, the success in conducting stochastic shape optimization of a bogie side frame with 41 random variables demonstrates the power of the MPSS-DD-GPCE method in solving industrial-scale engineering design problems.
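A one-dimensional polynomial chaos expansion shows the mechanism the abstract relies on: once the expansion coefficients are in hand, the mean and variance follow analytically from orthogonality, with no further sampling. The test function exp(0.3 Z) and the truncation order below are illustrative; the paper's DD-GPCE generalizes this to dependent, high-dimensional inputs.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# 1-D polynomial chaos for y(Z) = exp(0.3*Z), Z ~ N(0,1) (test function
# chosen for its known lognormal moments): project onto probabilists'
# Hermite polynomials He_k and read the moments off the coefficients.
order = 8
nodes, weights = He.hermegauss(40)        # Gauss-Hermite_e quadrature
weights = weights / weights.sum()         # normalize to the N(0,1) measure

y = np.exp(0.3 * nodes)
coeffs = [np.sum(weights * y * He.hermeval(nodes, [0] * k + [1]))
          / math.factorial(k)             # E[He_k^2] = k!
          for k in range(order + 1)]

mean = coeffs[0]                          # E[y] = c_0 by orthogonality
var = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)

print(mean, var)                          # vs exact lognormal moments
```

The exact lognormal values are E[y] = exp(0.045) and Var[y] = (exp(0.09) − 1)·exp(0.09); the order-8 expansion reproduces both to near machine precision.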
  3. Bach, Francis; Blei, David; Schölkopf, Bernhard (Ed.)
    This paper investigates the asymptotic behaviors of gradient descent algorithms (particularly accelerated gradient descent and stochastic gradient descent) in the context of stochastic optimization arising in statistics and machine learning, where objective functions are estimated from available data. We show that these algorithms can be modeled by continuous-time ordinary or stochastic differential equations. We establish gradient flow central limit theorems that describe the limiting dynamic behaviors of these algorithms and the large-sample performances of the related statistical procedures as the number of iterations and the data size both go to infinity; the limits are governed by linear ordinary or stochastic differential equations, such as time-dependent Ornstein-Uhlenbeck processes. This study provides a novel unified framework for joint computational and statistical asymptotic analysis: the computational analysis studies the dynamic behavior of the algorithms over time (the number of iterations), while the statistical analysis investigates the large-sample behavior of the statistical procedures (such as estimators and classifiers) that the algorithms compute; indeed, the statistical procedures are the limits of the random sequences generated by these iterative algorithms as the number of iterations goes to infinity. Based on the obtained gradient flow central limit theorems, the joint analysis identifies four factors (learning rate, batch size, gradient covariance, and Hessian) that lead to new theories regarding the local minima found by stochastic gradient descent in non-convex optimization problems.
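The Ornstein-Uhlenbeck limit mentioned above can be seen already in the simplest setting: SGD on a one-dimensional quadratic with additive gradient noise is an AR(1) recursion whose small-learning-rate stationary variance matches the OU formula eta·sigma²/(2h). All constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# SGD on f(x) = h*x^2/2 with additive gradient noise of variance sigma2:
#   x_{t+1} = x_t - eta * (h*x_t + xi_t),  xi_t ~ N(0, sigma2).
# For small eta this discretizes an Ornstein-Uhlenbeck process with
# stationary variance eta*sigma2/(2h).
h, sigma2, eta = 2.0, 1.0, 0.01
steps, burn = 200_000, 20_000
noise = rng.normal(0.0, np.sqrt(sigma2), size=steps)

x, samples = 0.0, np.empty(steps - burn)
for t in range(steps):
    x -= eta * (h * x + noise[t])      # one SGD step
    if t >= burn:                      # discard burn-in transient
        samples[t - burn] = x

empirical = samples.var()
theory = eta * sigma2 / (2 * h)        # OU stationary variance
print(empirical, theory)
```

Shrinking eta tightens the match, which is the continuous-time limit that the gradient flow central limit theorems formalize.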
    As we strive to establish a long-term presence in space, it is crucial to plan large-scale space missions and campaigns with future uncertainties in mind. However, integrated space mission planning, which simultaneously considers mission planning and spacecraft design, faces significant challenges when dealing with uncertainties; this problem is formulated as a stochastic mixed integer nonlinear program (MINLP), and solving it using the conventional method would be computationally prohibitive for realistic applications. Extending a deterministic decomposition method from our previous work, we propose a novel and computationally efficient approach for integrated space mission planning under uncertainty. The proposed method effectively combines the Alternating Direction Method of Multipliers (ADMM)-based decomposition framework from our previous work, robust optimization, and two-stage stochastic programming (TSSP). This hybrid approach first solves the integrated problem deterministically, assuming the worst scenario, to precompute the robust spacecraft design. Subsequently, the two-stage stochastic program is solved for mission planning, effectively transforming the problem into a more manageable mixed-integer linear program (MILP). This approach significantly reduces computational costs compared to the exact method, but may miss solutions that the exact method would find. We examine this balance through a case study of staged infrastructure deployment on the lunar surface under future demand uncertainty. When comparing the proposed method with a fully coupled benchmark, the results indicate that our approach can achieve nearly identical objective values (no worse than 1% in solved problems) while drastically reducing computational costs.
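The design-then-plan split described above can be illustrated on a deliberately tiny capacity-sizing toy (all costs, demands, and scenario probabilities are invented; the paper's actual model is a large MINLP solved via ADMM decomposition): stage 1 fixes a capacity before demand is known, and stage 2 buys shortfall at a penalty once a demand scenario is realized.

```python
import numpy as np

# Tiny two-stage stochastic program with recourse (all numbers invented).
scenarios = [(0.5, 8.0), (0.5, 14.0)]     # (probability, demand)
cap_cost, short_cost = 3.0, 5.0           # per-unit costs

def expected_cost(capacity):
    # Stage-2 recourse: pay a penalty for any unmet demand per scenario.
    recourse = sum(p * short_cost * max(d - capacity, 0.0)
                   for p, d in scenarios)
    return cap_cost * capacity + recourse

# Two-stage stochastic solution: size capacity against the distribution.
caps = np.linspace(0.0, 20.0, 2001)
best = min(caps, key=expected_cost)

# Robust alternative (analogous to the worst-case precomputation step):
# size for the worst-case demand, trading extra cost for zero shortfall.
robust = max(d for _, d in scenarios)

print(best, expected_cost(best), expected_cost(robust))
```

With these numbers the stochastic solution undersizes relative to the robust one and accepts some expected shortfall, capturing the cost/conservatism trade-off the abstract examines.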
  5.
    https://arxiv.org/abs/2007.14539 As in standard linear regression, in truncated linear regression we are given access to observations (A_i, y_i) whose dependent variable equals y_i = A_i^T x* + η_i, where x* is some fixed unknown vector of interest and η_i is independent noise; except we are only given an observation if its dependent variable y_i lies in some "truncation set" S ⊂ ℝ. The goal is to recover x* under some favorable conditions on the A_i's and the noise distribution. We prove that there exists a computationally and statistically efficient method for recovering k-sparse n-dimensional vectors x* from m truncated samples, which attains an optimal ℓ2 reconstruction error of O(√((k log n)/m)). As a corollary, our guarantees imply a computationally efficient and information-theoretically optimal algorithm for compressed sensing with truncation, which may arise from measurement saturation effects. Our result follows from a statistical and computational analysis of the Stochastic Gradient Descent (SGD) algorithm for solving a natural adaptation of the LASSO optimization problem that accommodates truncation. This generalizes the works of both: (1) [Daskalakis et al. 2018], where no regularization is needed due to the low dimensionality of the data, and (2) [Wainwright 2009], where the objective function is simple due to the absence of truncation. To deal with both truncation and high dimensionality at the same time, we develop new techniques that not only generalize the existing ones but, we believe, are of independent interest.
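A one-dimensional toy (the ground truth a = 1.5, truncation point, and sample sizes are invented here) makes the truncation effect concrete: least squares on the surviving samples is biased, while gradient ascent on the truncated-Gaussian log-likelihood (the population analogue of the SGD procedure in the abstract, without the LASSO penalty needed in high dimensions) removes the bias.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Truncated regression toy: y = a*x + noise is observed only when y > c,
# so ordinary least squares overestimates the slope; maximizing the
# truncated-Gaussian log-likelihood corrects for the censored mass.
a_true, c, n = 1.5, 1.0, 2_000
x_all = rng.uniform(0.5, 2.0, size=5 * n)
y_all = a_true * x_all + rng.standard_normal(5 * n)
keep = y_all > c                              # truncation set S = (c, inf)
x, y = x_all[keep][:n], y_all[keep][:n]

erf = np.vectorize(math.erf)
phi = lambda z: np.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
Phi = lambda z: 0.5 * (1.0 + erf(z / math.sqrt(2)))

def grad(a):
    # d/da of the mean truncated log-likelihood:
    # log N(y; a*x, 1) - log P(y > c | x), averaged over kept samples.
    z = a * x - c
    return np.mean((y - a * x) * x - x * phi(z) / Phi(z))

ols = float(np.sum(x * y) / np.sum(x * x))    # biased under truncation
a = ols
for _ in range(200):
    a += 0.5 * grad(a)                        # gradient ascent on the MLE

print(round(ols, 3), round(a, 3))
```

The correction term x·φ(a·x − c)/Φ(a·x − c) is exactly the inverse-Mills-ratio adjustment that the truncated LASSO objective inherits in the sparse, high-dimensional setting the paper analyzes.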