-
DeVore, R.; Kunoth, A. (Eds.) We construct uniformly bounded solutions of the equation div u = f for arbitrary data f in the critical space L^d(Ω), where Ω is a domain of R^d. This question was addressed by Bourgain & Brezis [BB2003], who proved that although the problem has a uniformly bounded solution, it is critical in the sense that there exists no linear solution operator for general L^d-data. We first discuss the validity of this existence result under weaker conditions than f ∈ L^d(Ω), and then focus our work on constructive processes for such uniformly bounded solutions. In the case d = 2, we present a direct one-step explicit construction, which generalizes for d > 2 to a (d − 1)-step construction based on induction. An explicit construction is also proposed for compactly supported data in L^{2,∞}(Ω) in the case d = 2. We further present constructive approaches based on the optimization of a certain loss functional adapted to the problem; this approach provides a two-step construction in the case d = 2. This optimization is used as the building block of a hierarchical multistep process introduced in [Tad2014] that converges to a solution in more general situations.
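As a schematic of the problem statement above (the uniform bound is as described in the abstract; the zero-mean compatibility condition is an assumption of this sketch for a bounded domain, not quoted from the abstract):

\[
\operatorname{div} u = f \ \ \text{in } \Omega \subset \mathbb{R}^d,
\qquad
\|u\|_{L^\infty(\Omega)} \le C\,\|f\|_{L^d(\Omega)},
\]

with a constant C independent of f and, for bounded Ω, the compatibility condition \(\int_\Omega f\,dx = 0\).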
-
We consider the problem of numerically approximating the solutions to a partial differential equation (PDE) when there is insufficient information to determine a unique solution. Our main example is the Poisson boundary value problem, when the boundary data is unknown and instead one observes finitely many linear measurements of the solution. We view this setting as an optimal recovery problem and develop theory and numerical algorithms for its solution. The main vehicle employed is the derivation and approximation of the Riesz representers of these functionals with respect to relevant Hilbert spaces of harmonic functions.
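A minimal sketch of representer-based recovery in a generic reproducing-kernel Hilbert space (a Gaussian kernel with point evaluations is assumed here for illustration; it is not the harmonic-function spaces or PDE measurements of the paper): given measurements w_i = ⟨u, r_i⟩, the minimum-norm reconstruction lies in the span of the representers r_i and is obtained by solving a Gram system.

```python
import numpy as np

# Toy RKHS with a Gaussian kernel; point evaluations serve as the linear functionals.
def kernel(x, y, scale=0.3):
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * scale ** 2))

def recover(x_obs, w_obs, x_eval, reg=1e-10):
    """Minimum-norm element matching the measurements, expressed in the
    span of the Riesz representers r_i = K(., x_i)."""
    G = kernel(x_obs, x_obs)                       # Gram matrix G_ij = <r_i, r_j>
    c = np.linalg.solve(G + reg * np.eye(len(x_obs)), w_obs)
    return kernel(x_eval, x_obs) @ c               # u*(x) = sum_j c_j r_j(x)

# Usage: recover a smooth function from 7 point values.
u = lambda x: np.sin(2 * np.pi * x)
x_obs = np.linspace(0.0, 1.0, 7)
x_eval = np.linspace(0.0, 1.0, 200)
u_star = recover(x_obs, u(x_obs), x_eval)
print(float(np.max(np.abs(u_star - u(x_eval)))))   # recovery error on the grid
```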
-
We prove Carl's type inequalities for the error of approximation of compact sets K by deep and shallow neural networks. This in turn gives estimates from below on how well we can approximate the functions in K when requiring the approximants to come from outputs of such networks. Our results are obtained as a byproduct of the study of the recently introduced Lipschitz widths.
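For context, the classical Carl inequality in its standard form for an operator T between Banach spaces (the paper proves analogues in which neural-network approximation plays the role of the approximation numbers; the display below is the classical inequality, not the paper's statement):

\[
\sup_{1 \le k \le n} k^{\alpha}\, e_k(T) \;\le\; c_{\alpha}\, \sup_{1 \le k \le n} k^{\alpha}\, a_k(T),
\qquad \alpha > 0,\ n \in \mathbb{N},
\]

where e_k(T) are the entropy numbers and a_k(T) the approximation numbers of T.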
-
This paper studies the problem of learning an unknown function f from given data about f. The learning problem is to give an approximation f^* to f that predicts the values of f away from the data. There are numerous settings for this learning problem depending on (i) what additional information we have about f (known as a model class assumption), (ii) how we measure the accuracy with which f^* predicts f, (iii) what is known about the data and data sites, and (iv) whether the data observations are polluted by noise. A mathematical description of the optimal performance possible (the smallest possible error of recovery) is known in the presence of a model class assumption. Under standard model class assumptions, it is shown in this paper that a near optimal f^* can be found by solving a certain discrete over-parameterized optimization problem with a penalty term. Here, near optimal means that the error is bounded by a fixed constant times the optimal error. This explains the advantage of over-parameterization, which is commonly used in modern machine learning. The main results of this paper prove that over-parameterized learning with an appropriate loss function gives a near optimal approximation f^* of the function f from which the data is collected. Quantitative bounds are given for how much over-parameterization needs to be employed and how the penalization needs to be scaled in order to guarantee a near optimal recovery of f. An extension of these results to the case where the data is polluted by additive deterministic noise is also given.
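A minimal sketch of the general recipe the abstract describes, namely fitting with many more parameters than data points and adding a scaled penalty. The random-feature model, loss, and penalty weight below are illustrative assumptions, not the paper's specific construction or scaling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data: m noise-free observations of an unknown function f at scattered sites.
f = lambda x: np.sin(3 * x) + 0.5 * np.cos(7 * x)
m = 20
x_data = np.sort(rng.uniform(-1.0, 1.0, m))
y_data = f(x_data)

# Over-parameterized model: N >> m random ReLU features (illustrative choice).
N = 2000
w = rng.normal(size=N)
b = rng.uniform(-1.0, 1.0, N)
features = lambda x: np.maximum(w * x[:, None] + b, 0.0)    # shape (len(x), N)

# Penalized least squares: min_c ||A c - y||^2 + lam * ||c||^2.
# The penalty weight lam stands in for the scaling the paper quantifies.
A = features(x_data)
lam = 1e-6
c = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ y_data)

# f^* predicts f away from the data sites.
x_test = np.linspace(-1.0, 1.0, 400)
f_star = features(x_test) @ c
print(float(np.max(np.abs(f_star - f(x_test)))))            # error of the recovery
```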
-
We give estimates from below for the error of approximation of a compact subset of a Banach space by the outputs of feed-forward neural networks with width W, depth l, and Lipschitz activation functions. We show that, modulo logarithmic factors, rates better than the entropy numbers' rates are possibly attainable only for neural networks whose depth l goes to infinity, and that there is no gain if we fix the depth and let the width W go to infinity.
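In symbols, the flavor of the lower bound described above (this display is a schematic paraphrase of the abstract; the index n(W, l), which grows with the parameter count, and the logarithmic factor are left generic and are not quoted from the paper):

\[
\sup_{f \in K}\, \inf_{g \in \Sigma_{W,l}} \|f - g\|_X \;\ge\; c\, \frac{\epsilon_{n(W,l)}(K)_X}{\text{log-factor}},
\]

where Σ_{W,l} denotes the set of outputs of width-W, depth-l networks with Lipschitz activation and ε_n(K)_X the entropy numbers of K in X.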