Title: Minimum norm interpolation in the ℓ1(ℕ) space
We consider the minimum norm interpolation problem in the ℓ1(ℕ) space, aiming at constructing a sparse interpolation solution. The original problem is reformulated in the pre-dual space, thereby inducing a norm on a related finite-dimensional Euclidean space. The dual problem is then transformed into a linear programming problem, which can be solved by existing methods. With that done, the solution of the original interpolation problem is recovered by solving an elementary finite-dimensional linear algebra equation. A specific example is presented to illustrate the proposed method, in which a sparse solution in the ℓ1(ℕ) space is compared to the dense solution in the ℓ2(ℕ) space. This example shows that a solution of the minimum norm interpolation problem in the ℓ1(ℕ) space is indeed sparse, while that of the minimum norm interpolation problem in the ℓ2(ℕ) space is not.
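As a hedged illustration of the sparse-versus-dense contrast described above (this is not the paper's code; the matrix A, the data b, and all sizes are synthetic placeholders), the following Python sketch solves a finite-dimensional analogue of both problems: the ℓ1 minimum norm interpolation recast as a linear program, and the ℓ2 minimum norm solution obtained from the pseudoinverse.

```python
# Minimal sketch: l1 minimum norm interpolation as a linear program (SciPy)
# versus the dense l2 minimum norm solution. Synthetic data; illustrative only.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 5, 50                      # few interpolation conditions, many coefficients
A = rng.standard_normal((m, n))   # rows play the role of interpolation functionals
b = rng.standard_normal(m)        # prescribed interpolation values

# l1 problem: min ||x||_1 subject to A x = b.
# Split x = u - v with u, v >= 0 and minimize sum(u) + sum(v).
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n), method="highs")
x_l1 = res.x[:n] - res.x[n:]

# l2 problem: minimum Euclidean norm solution of A x = b via the pseudoinverse.
x_l2 = np.linalg.pinv(A) @ b

print("l1 solution nonzeros:", int(np.sum(np.abs(x_l1) > 1e-8)))  # typically <= m
print("l2 solution nonzeros:", int(np.sum(np.abs(x_l2) > 1e-8)))  # typically n
```

In this toy setting the ℓ1 minimizer typically has at most m nonzero coefficients while the ℓ2 minimizer is fully dense, which mirrors the qualitative behavior reported in the abstract.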
Award ID(s):
1912958
PAR ID:
10296298
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Analysis and Applications
Volume:
19
Issue:
01
ISSN:
0219-5305
Page Range / eLocation ID:
21 to 42
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Shawe-Taylor, John (Ed.)
    Learning a function from a finite number of sampled data points (measurements) is a fundamental problem in science and engineering. This is often formulated as a minimum norm interpolation (MNI) problem, a regularized learning problem or, more generally, a semi-discrete inverse problem (SDIP), in either Hilbert spaces or Banach spaces. The goal of this paper is to systematically study solutions of these problems in Banach spaces. We aim at obtaining explicit representer theorems for their solutions, on which convenient solution methods can then be developed. For the MNI problem, the explicit representer theorems enable us to express the infimum in terms of the norm of the linear combination of the interpolation functionals. For the purpose of developing efficient computational algorithms, we establish the fixed-point equation formulation of solutions of these problems. We reveal that, unlike in a Hilbert space, solutions of these problems in a general Banach space may not reduce to truly finite-dimensional problems (certain infinite-dimensional components remain hidden). We demonstrate how this obstacle can be removed, reducing the original problem to a truly finite-dimensional one, in the special case when the Banach space is ℓ1(ℕ).
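The representer-theorem statement in item 1, specialized to ℓ1 in finite dimensions, can be checked numerically: the infimum of the ℓ1 norm over all interpolants equals the value of a dual problem posed over the interpolation functionals. The sketch below is only an illustration on synthetic data; the random A and b and the use of SciPy's linprog are assumptions, not the authors' setup.

```python
# Hedged numerical check (synthetic data) of the finite-dimensional l1 duality
#   min { ||x||_1 : A x = b }  =  max { b^T nu : ||A^T nu||_inf <= 1 }.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 4, 30
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Primal: split x = u - v with u, v >= 0.
primal = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                 bounds=[(0, None)] * (2 * n), method="highs")

# Dual: maximize b^T nu subject to ||A^T nu||_inf <= 1 (nu free in sign).
dual = linprog(-b, A_ub=np.vstack([A.T, -A.T]), b_ub=np.ones(2 * n),
               bounds=[(None, None)] * m, method="highs")

print("primal value:", primal.fun)   # minimum l1 norm of an interpolant
print("dual value:  ", -dual.fun)    # agrees up to solver tolerance
```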
  2. For generalized Korteweg–de Vries (KdV) models with polynomial nonlinearity, we establish a local smoothing property in [Formula: see text] for [Formula: see text]. Such a smoothing effect persists globally, provided that the [Formula: see text] norm does not blow up in finite time. More specifically, we show that a translate of the nonlinear part of the solution gains [Formula: see text] derivatives for [Formula: see text]. Following a new simple method, which is of independent interest, we establish that, for [Formula: see text], the [Formula: see text] norm of a solution grows at most by [Formula: see text] if the [Formula: see text] norm is a priori controlled.
  3. This paper presents a black-box framework for accelerating packing optimization solvers. Our method applies to packing linear programming problems and a family of convex programming problems with linear constraints. The framework is designed for high-dimensional problems, for which the number of variables n is much larger than the number of measurements m. Given an [Formula: see text] problem, we construct a smaller [Formula: see text] problem, whose solution we use to find an approximation to the optimal solution. Our framework can accelerate both exact and approximate solvers. If the solver being accelerated produces an α-approximation, then we produce a [Formula: see text]-approximation of the optimal solution to the original problem. We present worst-case guarantees on run time and empirically demonstrate speedups of two orders of magnitude. 
  4. Sparsity of a learning solution is a desirable feature in machine learning. Certain reproducing kernel Banach spaces (RKBSs) are appropriate hypothesis spaces for sparse learning methods. The goal of this paper is to understand what kind of RKBSs can promote sparsity of learning solutions. We consider two typical learning models in an RKBS: the minimum norm interpolation (MNI) problem and the regularization problem. We first establish an explicit representer theorem for solutions of these problems, which represents the extreme points of the solution set by a linear combination of the extreme points of the subdifferential set of the norm function, which is data-dependent. We then propose sufficient conditions on the RKBS that can transform the explicit representation of the solutions into a sparse kernel representation with fewer terms than the number of observed data. Under the proposed sufficient conditions, we investigate the role of the regularization parameter in the sparsity of the regularized solutions. We further show that two specific RKBSs, the sequence space ℓ1(ℕ) and the measure space, admit sparse representer theorems for both the MNI and regularization models.
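To illustrate the role of the regularization parameter discussed in item 4, the following minimal sketch (hypothetical data; plain iterative soft thresholding, not the authors' method) solves the ℓ1-regularized model for several parameter values and counts the surviving coefficients; larger parameters produce sparser solutions.

```python
# Minimal, illustrative sketch: sparsity of the l1-regularized solution
#   min_x 0.5 * ||A x - b||_2^2 + lam * ||x||_1
# as the regularization parameter lam grows. Solved by plain ISTA iterations.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (componentwise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iter=5000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 60))
b = rng.standard_normal(10)
for lam in (0.01, 0.1, 1.0):
    x = ista(A, b, lam)
    print(f"lam = {lam}: nonzeros = {int(np.sum(np.abs(x) > 1e-6))}")
```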
  5. In this paper, we study kernel ridgeless regression, including the case of interpolating solutions. We prove that maximizing the leave-one-out ([Formula: see text]) stability minimizes the expected error. Further, we prove that the minimum norm solution, to which gradient algorithms are known to converge, is the most stable solution. More precisely, we show that the minimum norm interpolating solution minimizes a bound on [Formula: see text] stability, which in turn is controlled by the smallest singular value, and hence the condition number, of the empirical kernel matrix. These quantities can be characterized in the asymptotic regime where both the dimension ([Formula: see text]) and cardinality ([Formula: see text]) of the data go to infinity (with [Formula: see text] as [Formula: see text]). Our results suggest that the property of [Formula: see text] stability of the learning algorithm with respect to perturbations of the training set may provide a more general framework than the classical theory of Empirical Risk Minimization (ERM). While ERM was developed to deal with the classical regime, in which the architecture of the learning network is fixed and [Formula: see text], the modern regime focuses on interpolating regressors and overparameterized models, where both [Formula: see text] and [Formula: see text] go to infinity. Since the stability framework is known to be equivalent to the classical theory in the classical regime, our results here suggest that it may be interesting to extend it beyond kernel regression to other overparameterized algorithms such as deep networks.
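A small illustrative computation of the quantities appearing in item 5 (a Gaussian kernel and synthetic one-dimensional data are assumptions, not the paper's experiments): the minimum norm interpolating coefficients of a kernel regressor, together with the smallest singular value and condition number of the empirical kernel matrix that control the stability bound.

```python
# Illustrative sketch: minimum norm kernel interpolation and the conditioning
# of the empirical kernel matrix. Gaussian kernel and synthetic data assumed.
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(20, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(20)

K = gaussian_kernel(X, X)
alpha = np.linalg.pinv(K) @ y               # minimum norm interpolating coefficients
svals = np.linalg.svd(K, compute_uv=False)  # singular values, descending
print("training residual:", np.linalg.norm(K @ alpha - y))
print("smallest singular value:", svals[-1])
print("condition number:", svals[0] / svals[-1])
```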