

Search for: All records

Award ID contains: 1720297

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Rebollo, Tomás C.; Donat, Rosa; Higueras, Inmaculada (Eds.)
    The exploration of complex physical or technological processes usually requires exploiting available information from different sources: (i) physical laws, often represented as a family of parameter-dependent partial differential equations, and (ii) data provided by measurement devices or sensors. The number of sensors is typically limited, and data acquisition may be expensive and in some cases even harmful. This article reviews some recent developments for this “small-data” scenario, where inversion is strongly aggravated by the typically large parametric dimensionality. The proposed concepts may be viewed as exploring alternatives to Bayesian inversion in favor of more deterministic accuracy quantification in relation to the required computational complexity. We discuss optimality criteria which delineate intrinsic information limits, and highlight the role of reduced models for developing efficient computational strategies. In particular, the need to adapt the reduced models—not to a specific (possibly noisy) data set but rather to the sensor system—is a central theme. This, in turn, is facilitated by exploiting geometric perspectives based on proper stable variational formulations of the continuous model.
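As a minimal illustration of the kind of task this review addresses, the following sketch recovers a state from a few linear sensor measurements by least squares over a reduced space. All dimensions, matrices, and data below are made up for illustration; this is not the article's method, only a generic least-squares estimator under the assumption that the true state lies in the reduced space.

```python
import numpy as np

rng = np.random.default_rng(0)

N, n, m = 200, 5, 12   # ambient dimension, reduced dimension, number of sensors

# Hypothetical reduced basis (orthonormal columns) and sensor matrix:
# row i of W plays the role of the i-th measurement functional.
V = np.linalg.qr(rng.standard_normal((N, n)))[0]
W = rng.standard_normal((m, N))

# Assume the true state lies in the reduced space; measurements are noise-free.
u_true = V @ rng.standard_normal(n)
y = W @ u_true

# Least-squares fit of the reduced coordinates to the data: min_c ||W V c - y||.
c, *_ = np.linalg.lstsq(W @ V, y, rcond=None)
u_est = V @ c

print(np.linalg.norm(u_est - u_true))  # near machine precision: exact recovery
```

With more sensors than reduced dimensions (m > n) and a generic sensor matrix, the reduced coordinates are determined exactly; the interesting regime discussed in the article is when the state only lies *near* the reduced space and the sensor system must be accounted for in its construction.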
  2. Reduced bases have been introduced for the approximation of parametrized PDEs in applications where many online queries are required. Their numerical efficiency for such problems has been theoretically confirmed in Binev et al. (SIAM J. Math. Anal. 43 (2011) 1457–1472) and DeVore et al. (Constructive Approximation 37 (2013) 455–466), where it is shown that the reduced basis space V_n of dimension n, constructed by a certain greedy strategy, has approximation error similar to that of the optimal space associated to the Kolmogorov n-width of the solution manifold M. The greedy construction of the reduced basis space is performed in an offline stage which requires at each step a maximization of the current error over the parameter space. For the purpose of numerical computation, this maximization is performed over a finite training set obtained through a discretization of the parameter domain. Guaranteeing a final approximation error ε for the space generated by the greedy algorithm requires, in principle, that the snapshots associated to this training set constitute an approximation net for the solution manifold with accuracy of order ε. Hence, the size of the training set is the ε-covering number of M, and this covering number typically behaves like exp(Cε^{-1/s}) for some C > 0 when the solution manifold has n-width decay O(n^{-s}). Thus, the sheer size of the training set prohibits implementation of the algorithm when ε is small. The main result of this paper shows that, if one is willing to accept results which hold with high probability rather than with certainty, then for a large class of relevant problems one may replace the fine discretization by a random training set of size polynomial in ε^{-1}. Our proof of this fact is established by using inverse inequalities for polynomials in high dimensions.
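A toy version of the greedy selection over a finite training set might look as follows. The snapshot data, the tolerance, and the Gram–Schmidt update are illustrative assumptions; in particular, the paper's algorithm works with PDE snapshots u(μ) and an error surrogate, not with explicit projection errors on a stored matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

def greedy_basis(snapshots, tol):
    """Greedily pick snapshot columns until the worst projection error is <= tol."""
    Q = np.empty((snapshots.shape[0], 0))              # orthonormal basis built so far
    while True:
        residuals = snapshots - Q @ (Q.T @ snapshots)  # errors of projection onto span(Q)
        errs = np.linalg.norm(residuals, axis=0)
        k = int(np.argmax(errs))                       # training point with worst error
        if errs[k] <= tol:
            return Q
        Q = np.column_stack([Q, residuals[:, k] / errs[k]])  # Gram-Schmidt step

# A rank-5 stand-in for a "solution manifold" sampled at 30 training parameters.
training = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 30))
Q = greedy_basis(training, tol=1e-8)
print(Q.shape[1])  # greedy stops after 5 steps: the data has rank 5
```

The cost driver the paper addresses is the size of `training`: covering the manifold to accuracy ε may need exponentially many columns, whereas the main result allows a random training set of size polynomial in ε^{-1}, at the price of guarantees that hold only with high probability.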
  3. The challenge of mastering computational tasks of enormous size tends to override questioning the quality of the numerical outcome in terms of accuracy. By this we do not mean the accuracy within the discrete setting, which itself may be far from evident for ill-conditioned problems or when iterative solvers are involved. By accuracy-controlled computation we mean the deviation of the numerical approximation from the exact solution of an underlying continuous problem in a relevant metric, which has been the initiating interest in the first place. Can the accuracy of a numerical result be rigorously certified? This question is particularly important in the context of uncertainty quantification, when many possible sources of uncertainty interact, and it is the guiding question throughout this article, which reviews recent developments in low-rank approximation methods for problems in high spatial dimensions. In particular, we highlight the role of adaptivity when dealing with such strongly nonlinear methods, which integrate issues of discrete and continuous accuracy in a natural way.
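For a concrete, if elementary, instance of a low-rank approximation whose accuracy can be rigorously certified, one can look at truncated SVD: by the Eckart–Young theorem, the Frobenius error of the best rank-r approximation equals the norm of the discarded singular values. This standard linear-algebra example is our illustration, not a method taken from the article.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 60))        # stand-in for a discretized high-dimensional object

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 3
A_r = (U[:, :r] * s[:r]) @ Vt[:r]        # best rank-r approximation in the Frobenius norm

err = np.linalg.norm(A - A_r)            # actual Frobenius error
cert = np.sqrt(np.sum(s[r:] ** 2))       # a-priori certificate from dropped singular values
print(abs(err - cert))                   # the two agree up to round-off
```

The point of the comparison is that the error is not merely estimated but known exactly from quantities computed along the way; the article is concerned with obtaining such certified accuracy, relative to the continuous problem, for far less transparent nonlinear low-rank formats in high dimensions.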
  4. This paper is concerned with a posteriori error bounds for linear transport equations and related questions of contriving corresponding adaptive solution strategies in the context of Discontinuous Petrov–Galerkin (DPG) schemes. After indicating our motivation for this investigation in a wider context, the first major part of the paper is devoted to the derivation and analysis of a posteriori error bounds that, under mild conditions on variable convection fields, are efficient and, modulo a data-oscillation term, reliable. In particular, it is shown that these error estimators are computed at a cost that stays uniformly proportional to the problem size. The remaining part of the paper is concerned with the question of whether typical bulk criteria, known from adaptive strategies for elliptic problems, entail a fixed error reduction rate in the context of transport equations as well. This turns out to be significantly more difficult than for elliptic problems, and at this point we can give a complete affirmative answer only for a single spatial dimension. For the general multidimensional case we provide partial results which we find of interest in their own right. An essential distinction from known concepts is that global arguments enter the issue of error reduction. An important ingredient of the underlying analysis is to relate the derived error indicators to the residuals that naturally arise in related least-squares formulations. This reveals a close interrelation between both settings regarding error reduction in the context of adaptive refinements.
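The bulk criterion discussed above can be sketched in a few lines: mark a smallest set of cells whose squared local error indicators sum to at least a fixed fraction θ of the total. The indicator values and the threshold below are hypothetical, and the sketch says nothing about the reduction-rate analysis that is the hard part for transport equations.

```python
import numpy as np

def bulk_mark(indicators, theta=0.5):
    """Return indices of a minimal set M with sum_{i in M} eta_i^2 >= theta * sum_i eta_i^2."""
    eta2 = np.asarray(indicators, dtype=float) ** 2
    order = np.argsort(eta2)[::-1]                 # largest indicators first
    csum = np.cumsum(eta2[order])
    k = int(np.searchsorted(csum, theta * eta2.sum())) + 1
    return order[:k]

# Hypothetical local error indicators on five cells.
eta = np.array([0.1, 0.5, 0.2, 0.05, 0.4])
marked = bulk_mark(eta, theta=0.6)
print(sorted(marked.tolist()))  # cells 1 and 4 already carry over 60% of the squared error
```

For elliptic problems, refining such a marked set is known to reduce the error by a fixed factor per step; the paper asks whether the same criterion yields a fixed reduction rate for transport equations, where the answer turns out to involve global arguments.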