

Title: Nonlinear methods for model reduction
Typical model reduction methods for parametric partial differential equations construct a linear space V_n which approximates well the solution manifold M consisting of all solutions u(y) with y the vector of parameters. In many problems of numerical computation, nonlinear methods such as adaptive approximation, n-term approximation, and certain tree-based methods may provide improved numerical efficiency over linear methods. Nonlinear model reduction methods replace the linear space V_n by a nonlinear space Σ_n. Little is known in terms of their performance guarantees, and most existing numerical experiments use a parameter dimension of at most two. In this work, we make a step towards a more cohesive theory for nonlinear model reduction. Framing these methods in the general setting of library approximation, we give a first comparison of their performance with the performance of standard linear approximation for any compact set. We then study these methods for solution manifolds of parametrized elliptic PDEs. We study a specific example of library approximation where the parameter domain is split into a finite number N of rectangular cells, with affine spaces of dimension m assigned to each cell, and give performance guarantees with respect to accuracy of approximation versus m and N.
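As a concrete illustration of the cell-based library approximation described in the abstract, the following is a minimal Python sketch, not the paper's construction: it assumes a user-supplied `solve(y)` routine returning a discretized solution vector, splits a rectangular parameter box into equal cells, and attaches to each cell an affine space built from the mean and leading SVD modes of local snapshots. The function names and the snapshot-based choice of affine space are illustrative assumptions.

```python
import numpy as np

def build_library(solve, bounds, cells_per_dim, m, snaps_per_cell=20, seed=0):
    """Split a rectangular parameter domain into N = cells_per_dim**d cells and
    attach an affine space of dimension m to each cell (illustrative choice:
    mean snapshot plus the leading m SVD modes of local snapshots)."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    edges = [np.linspace(lo, hi, cells_per_dim + 1) for lo, hi in bounds]
    library = {}
    for idx in np.ndindex(*([cells_per_dim] * d)):
        lo = np.array([edges[k][idx[k]] for k in range(d)])
        hi = np.array([edges[k][idx[k] + 1] for k in range(d)])
        # local snapshots drawn uniformly from this cell
        Y = lo + (hi - lo) * rng.random((snaps_per_cell, d))
        S = np.column_stack([solve(y) for y in Y])
        ubar = S.mean(axis=1, keepdims=True)
        U, _, _ = np.linalg.svd(S - ubar, full_matrices=False)
        library[idx] = (lo, hi, ubar[:, 0], U[:, :m])   # affine space: ubar + span(U_m)
    return library

def approximate(library, y, u):
    """Project u onto the affine space assigned to the cell containing y."""
    for lo, hi, ubar, Um in library.values():
        if np.all(y >= lo) and np.all(y <= hi):
            return ubar + Um @ (Um.T @ (u - ubar))
    raise ValueError("parameter outside the library's domain")
```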
Award ID(s):
1817691
NSF-PAR ID:
10279446
Author(s) / Creator(s):
; ; ; ; ;
Date Published:
Journal Name:
ESAIM: Mathematical Modelling and Numerical Analysis
Volume:
55
Issue:
2
ISSN:
0764-583X
Page Range / eLocation ID:
507 to 531
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Model reduction methods usually focus on the error performance analysis; however, in the presence of uncertainties, it is important to analyze the robustness properties of the error in model reduction as well. This problem is particularly relevant for engineered biological systems that need to function in a largely unknown and uncertain environment. We give robustness guarantees for structured model reduction of linear and nonlinear dynamical systems under parametric uncertainties. We consider a model reduction problem where the states in the reduced model are a strict subset of the states of the full model, and the dynamics for all of the other states are collapsed to zero (similar to quasi‐steady‐state approximation). We show two approaches to compute a robustness guarantee metric for any such model reduction: a direct linear analysis method for linear dynamics and a sensitivity-analysis-based approach that also works for nonlinear dynamics. Using the robustness guarantees with an error metric and an input‐output mapping metric, we propose an automated model reduction method to determine the best possible reduced model for a given detailed system model. We apply our method to (1) the design space exploration of a gene expression system, which leads to a new mathematical model that accounts for the limited resources in the system, and (2) the model reduction of a population control circuit in bacterial cells.

     
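The state-collapse step described in the abstract above (retained states keep their dynamics, the remaining states are set to quasi-steady state) can be sketched as follows. This is a minimal illustration, not the paper's automated reduction procedure; it assumes the full model is available as a generic right-hand side f(x, p) together with index sets for the retained and collapsed states.

```python
import numpy as np
from scipy.optimize import fsolve

def reduce_qssa(f, keep_idx, collapse_idx, x0):
    """Collapse the dynamics of the states in collapse_idx to zero
    (quasi-steady-state style) and return a reduced right-hand side
    for the retained states in keep_idx."""
    def reduced_rhs(t, x_keep, p):
        def residual(x_col):
            x = np.empty(len(keep_idx) + len(collapse_idx))
            x[keep_idx] = x_keep
            x[collapse_idx] = x_col
            return f(x, p)[collapse_idx]     # enforce dx_collapsed/dt = 0
        x_col = fsolve(residual, x0[collapse_idx])
        x = np.empty_like(x0, dtype=float)
        x[keep_idx] = x_keep
        x[collapse_idx] = x_col
        return f(x, p)[keep_idx]             # dynamics of the retained states
    return reduced_rhs
```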
  2. Reduced bases have been introduced for the approximation of parametrized PDEs in applications where many online queries are required. Their numerical efficiency for such problems has been theoretically confirmed in Binev et al. (SIAM J. Math. Anal. 43 (2011) 1457–1472) and DeVore et al. (Constructive Approximation 37 (2013) 455–466), where it is shown that the reduced basis space V_n of dimension n, constructed by a certain greedy strategy, has approximation error similar to that of the optimal space associated to the Kolmogorov n-width of the solution manifold. The greedy construction of the reduced basis space is performed in an offline stage which requires at each step a maximization of the current error over the parameter space. For the purpose of numerical computation, this maximization is performed over a finite training set obtained through a discretization of the parameter domain. To guarantee a final approximation error ε for the space generated by the greedy algorithm requires in principle that the snapshots associated to this training set constitute an approximation net for the solution manifold with accuracy of order ε. Hence, the size of the training set is the ε-covering number of M, and this covering number typically behaves like exp(Cε^{-1/s}) for some C > 0 when the solution manifold has n-width decay O(n^{-s}). Thus, the sheer size of the training set prohibits implementation of the algorithm when ε is small. The main result of this paper shows that, if one is willing to accept results which hold with high probability, rather than with certainty, then for a large class of relevant problems one may replace the fine discretization by a random training set of size polynomial in ε^{-1}. Our proof of this fact is established by using inverse inequalities for polynomials in high dimensions.
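For orientation, here is a minimal sketch of a greedy reduced-basis loop driven by a random training set, in the spirit of the result described above. It uses the exact projection error of precomputed snapshots as the error surrogate, whereas a practical offline stage would use a cheap a posteriori estimator; `solve` and `sample_params` are assumed to be user-supplied.

```python
import numpy as np

def greedy_reduced_basis(solve, sample_params, n_train, n_max, tol, seed=0):
    """Greedy snapshot selection over a random training set of size n_train;
    stops when the worst projection error falls below tol."""
    rng = np.random.default_rng(seed)
    Y = [sample_params(rng) for _ in range(n_train)]      # random training set
    snapshots = np.column_stack([solve(y) for y in Y])    # used only as error surrogate here
    Q = np.zeros((snapshots.shape[0], 0))                 # orthonormal reduced basis
    for _ in range(n_max):
        residuals = snapshots - Q @ (Q.T @ snapshots)     # projection errors onto span(Q)
        errs = np.linalg.norm(residuals, axis=0)
        k = int(np.argmax(errs))
        if errs[k] < tol:
            break
        Q = np.column_stack([Q, residuals[:, k] / errs[k]])   # Gram-Schmidt step
    return Q
```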
  3. Sufficient dimension reduction is popular for reducing data dimensionality without stringent model assumptions. However, most existing methods may work poorly for binary classification. For example, sliced inverse regression (Li, 1991) can estimate at most one direction if the response is binary. In this paper we propose principal weighted support vector machines, a unified framework for linear and nonlinear sufficient dimension reduction in binary classification. Its asymptotic properties are studied, and an efficient computing algorithm is proposed. Numerical examples demonstrate its performance in binary classification. 
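A rough sketch of the linear variant of the idea above, not the authors' algorithm: solve a weighted SVM for a grid of class weights, then take leading eigenvectors of the aggregated outer products of the normal vectors as the reduction directions. The use of scikit-learn's LinearSVC, the weight grid, and the 0/1 label convention are stand-in assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def principal_weighted_svm(X, y, d, weights=np.linspace(0.1, 0.9, 9)):
    """Linear sufficient dimension reduction for binary y (labels 0/1):
    fit one weighted SVM per weight pi and extract the top d eigenvectors
    of the summed outer products of the SVM normal vectors."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize predictors
    M = np.zeros((X.shape[1], X.shape[1]))
    for pi in weights:
        clf = LinearSVC(class_weight={0: 1.0 - pi, 1: pi}, C=1.0, max_iter=10000)
        clf.fit(X, y)
        w = clf.coef_.ravel()
        M += np.outer(w, w)                        # aggregate candidate directions
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, np.argsort(vals)[::-1][:d]]     # leading d directions
```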
  4. Abstract The replacement of a nonlinear parameter-to-observable mapping with a linear (affine) approximation is often carried out to reduce the computational costs associated with solving large-scale inverse problems governed by partial differential equations (PDEs). In the case of a linear parameter-to-observable mapping with normally distributed additive noise and a Gaussian prior measure on the parameters, the posterior is Gaussian. However, replacing an accurate model with a (possibly well justified) linear surrogate model can give misleading results if the induced model approximation error is not accounted for. To account for the errors, the Bayesian approximation error (BAE) approach can be utilised, in which the first- and second-order statistics of the errors are computed via sampling. The most common linear approximation is carried out via a linear Taylor expansion, which requires the computation of (Fréchet) derivatives of the parameter-to-observable mapping with respect to the parameters of interest. In this paper, we prove that the (approximate) posterior measure obtained by replacing the nonlinear parameter-to-observable mapping with a linear approximation is in fact independent of the choice of the linear approximation when the BAE approach is employed. Thus, somewhat non-intuitively, employing the zero model as the linear approximation gives the same approximate posterior as any other choice of linear approximation of the parameter-to-observable model. The independence of the linear approximation is demonstrated mathematically and illustrated with two numerical PDE-based problems: an inverse scattering type problem and an inverse conductivity type problem.
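The sampling step of the BAE approach lends itself to a short sketch. The following assumes a Gaussian prior, additive Gaussian noise, an accurate forward map F, and some linear surrogate A, and applies the standard linear-Gaussian update with the sampled error mean and covariance folded into the noise model; it is an illustration of the general approach, not the paper's implementation.

```python
import numpy as np

def bae_gaussian_posterior(F, A, m_pr, C_pr, C_noise, d, n_samples=500, seed=0):
    """Approximate Gaussian posterior for data d under the linear surrogate A,
    with the approximation error e = F(x) - A x accounted for via sampled statistics."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(C_pr)
    xs = m_pr[:, None] + L @ rng.standard_normal((len(m_pr), n_samples))   # prior draws
    E = np.column_stack([F(xs[:, i]) - A @ xs[:, i] for i in range(n_samples)])
    e_mean = E.mean(axis=1)
    C_e = np.cov(E)                               # sampled BAE covariance
    S = A @ C_pr @ A.T + C_noise + C_e            # data-space covariance with BAE term
    K = np.linalg.solve(S, A @ C_pr).T            # Kalman-type gain C_pr A^T S^{-1}
    m_post = m_pr + K @ (d - e_mean - A @ m_pr)   # data shifted by the BAE mean
    C_post = C_pr - K @ A @ C_pr
    return m_post, C_post
```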
  5. Summary Envelopes have been proposed in recent years as a nascent methodology for sufficient dimension reduction and efficient parameter estimation in multivariate linear models. We extend the classical definition of envelopes in Cook et al. (2010) to incorporate a nonlinear conditional mean function and a heteroscedastic error. Given any two random vectors ${X}\in\mathbb{R}^{p}$ and ${Y}\in\mathbb{R}^{r}$, we propose two new model-free envelopes, called the martingale difference divergence envelope and the central mean envelope, and study their relationships to the standard envelope in the context of response reduction in multivariate linear models. The martingale difference divergence envelope effectively captures the nonlinearity in the conditional mean without imposing any parametric structure or requiring any tuning in estimation. Heteroscedasticity, or nonconstant conditional covariance of ${Y}\mid{X}$, is further detected by the central mean envelope based on a slicing scheme for the data. We reveal the nested structure of different envelopes: (i) the central mean envelope contains the martingale difference divergence envelope, with equality when ${Y}\mid{X}$ has a constant conditional covariance; and (ii) the martingale difference divergence envelope contains the standard envelope, with equality when ${Y}\mid{X}$ has a linear conditional mean. We develop an estimation procedure that first obtains the martingale difference divergence envelope and then estimates the additional envelope components in the central mean envelope. We establish consistency in envelope estimation of the martingale difference divergence envelope and central mean envelope without stringent model assumptions. Simulations and real-data analysis demonstrate the advantages of the martingale difference divergence envelope and the central mean envelope over the standard envelope in dimension reduction. 
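As a pointer to how the martingale difference divergence envelope might be approached computationally, here is a sketch based on the sample martingale difference divergence matrix as it is usually defined in that literature (a V-statistic over pairwise distances); taking the span of its leading eigenvectors is a simplification of the paper's estimation procedure, and the function names are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mdd_matrix(X, Y):
    """Sample martingale difference divergence matrix of Y given X, which
    captures departures from a constant conditional mean E[Y | X]."""
    n = X.shape[0]
    D = squareform(pdist(X))        # pairwise Euclidean distances ||X_i - X_j||
    Yc = Y - Y.mean(axis=0)         # center the responses
    return -(Yc.T @ D @ Yc) / n**2

def mdd_envelope_basis(X, Y, u):
    """Crude estimate of a u-dimensional envelope basis: the span of the
    leading eigenvectors of the sample MDD matrix."""
    vals, vecs = np.linalg.eigh(mdd_matrix(X, Y))
    return vecs[:, np.argsort(np.abs(vals))[::-1][:u]]
```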