Title: Smoothing and growth bound of periodic generalized Korteweg–De Vries equation
For generalized Korteweg–De Vries (KdV) models with polynomial nonlinearity, we establish a local smoothing property in [Formula: see text] for [Formula: see text]. This smoothing effect persists globally, provided that the [Formula: see text] norm does not blow up in finite time. More specifically, we show that a translate of the nonlinear part of the solution gains [Formula: see text] derivatives for [Formula: see text]. Using a new, simple method, which is of independent interest, we establish that, for [Formula: see text], the [Formula: see text] norm of a solution grows at most by [Formula: see text] if the [Formula: see text] norm is a priori controlled.
Award ID(s): 1908626
PAR ID: 10345205
Author(s) / Creator(s): ;
Date Published:
Journal Name: Journal of Hyperbolic Differential Equations
Volume: 18
Issue: 04
ISSN: 0219-8916
Page Range / eLocation ID: 899 to 930
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    We consider the minimum norm interpolation problem in the [Formula: see text] space, with the aim of constructing a sparse interpolating solution. The original problem is reformulated in the pre-dual space, thereby inducing a norm on a related finite-dimensional Euclidean space. The dual problem is then transformed into a linear programming problem, which can be solved by existing methods. Once this is done, the original interpolation problem reduces to solving an elementary finite-dimensional linear algebra equation. A specific example is presented to illustrate the proposed method, in which a sparse solution in the [Formula: see text] space is compared to the dense solution in the [Formula: see text] space. This example shows that a solution of the minimum norm interpolation problem in the [Formula: see text] space is indeed sparse, whereas that of the minimum norm interpolation problem in the [Formula: see text] space is not.
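The contrast between sparse minimum-ℓ1-norm and dense minimum-ℓ2-norm interpolants can be sketched in a finite-dimensional analogue of the setting above. This is only an illustration, not the paper's pre-dual construction: the matrix `A`, data `b`, and sparse target below are invented stand-ins, and the standard splitting `x = u - v` is used to recast ℓ1 minimization as a linear program.

```python
# Illustrative sketch: minimum l1-norm solution of an underdetermined
# interpolation system A x = b via the standard LP reformulation
# x = u - v with u, v >= 0, minimizing sum(u + v).
# A, b, and the sparse target are hypothetical stand-ins.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 5, 20                                  # 5 interpolation conditions, 20 unknowns
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[2, 7, 11]] = [1.0, -2.0, 0.5]         # a sparse vector consistent with the data
b = A @ x_true

# minimize 1^T (u + v)  subject to  A (u - v) = b,  u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_l1 = res.x[:n] - res.x[n:]

# Compare with the minimum l2-norm interpolant, which is typically dense.
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
print("l1 solution nonzeros (>1e-6):", int(np.sum(np.abs(x_l1) > 1e-6)))
print("l2 solution nonzeros (>1e-6):", int(np.sum(np.abs(x_l2) > 1e-6)))
```

Both vectors interpolate the same data, but the LP solution concentrates on few coordinates, while the least-squares solution spreads mass over essentially all of them.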
  2. In this paper, we study kernel ridgeless regression, including the case of interpolating solutions. We prove that maximizing the leave-one-out ([Formula: see text]) stability minimizes the expected error. We further prove that the minimum norm solution — to which gradient algorithms are known to converge — is the most stable solution. More precisely, we show that the minimum norm interpolating solution minimizes a bound on [Formula: see text] stability, which in turn is controlled by the smallest singular value, hence the condition number, of the empirical kernel matrix. These quantities can be characterized in the asymptotic regime where both the dimension ([Formula: see text]) and cardinality ([Formula: see text]) of the data go to infinity (with [Formula: see text] as [Formula: see text]). Our results suggest that the property of [Formula: see text] stability of the learning algorithm with respect to perturbations of the training set may provide a more general framework than the classical theory of Empirical Risk Minimization (ERM). While ERM was developed to deal with the classical regime in which the architecture of the learning network is fixed and [Formula: see text], the modern regime focuses on interpolating regressors and overparameterized models, when both [Formula: see text] and [Formula: see text] go to infinity. Since the stability framework is known to be equivalent to the classical theory in the classical regime, our results here suggest that it may be interesting to extend it beyond kernel regression to other overparameterized algorithms such as deep networks.
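The two objects at the center of the abstract above — the minimum-norm interpolating solution and the smallest singular value of the empirical kernel matrix — can be sketched numerically. This is a hedged illustration only: the Gaussian (RBF) kernel, the data, and all sizes are invented assumptions, not the paper's setup.

```python
# Illustrative sketch: ridgeless (interpolating) kernel regression.
# The minimum-RKHS-norm interpolant has coefficients alpha = K^{-1} y,
# and the conditioning of this fit is read off from the spectrum of K.
# Kernel choice, bandwidth, and data are hypothetical stand-ins.
import numpy as np

def rbf_kernel(X, Z, gamma=2.0):
    # Gaussian kernel matrix between the rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
n, d = 30, 2
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

K = rbf_kernel(X, X)
alpha = np.linalg.solve(K, y)       # minimum-norm interpolating coefficients

pred = K @ alpha                    # the interpolant fits the training data exactly
smin = np.linalg.svd(K, compute_uv=False)[-1]
print("max training residual:", np.abs(pred - y).max())
print("smallest singular value of K:", smin)
```

When `smin` is tiny (ill-conditioned `K`), the coefficients `alpha` become highly sensitive to perturbations of the training set — the mechanism by which the smallest singular value controls the stability bound discussed above.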
  3. null (Ed.)
    Let [Formula: see text] be a convex function satisfying [Formula: see text], [Formula: see text] for [Formula: see text], and [Formula: see text]. Consider the unique entropy admissible (i.e. Kružkov) solution [Formula: see text] of the scalar, 1-d Cauchy problem [Formula: see text], [Formula: see text]. For compactly supported data [Formula: see text] with bounded [Formula: see text]-variation, we realize the solution [Formula: see text] as a limit of front-tracking approximations and show that the [Formula: see text]-variation of (the right continuous version of) [Formula: see text] is non-increasing in time. We also establish the natural time-continuity estimate [Formula: see text] for [Formula: see text], where [Formula: see text] depends on [Formula: see text]. Finally, according to a theorem of Goffman–Moran–Waterman, any regulated function of compact support has bounded [Formula: see text]-variation for some [Formula: see text]. As a corollary we thus have: if [Formula: see text] is a regulated function, so is [Formula: see text] for all [Formula: see text]. 
  4. This work aims to prove a Hardy-type inequality and a trace theorem for a class of function spaces on smooth domains with a nonlocal character. Functions in these spaces are allowed to be as rough as an [Formula: see text]-function inside the domain of definition but as smooth as a [Formula: see text]-function near the boundary. This feature is captured by a norm that is characterized by a nonlocal interaction kernel defined heterogeneously, with a special localization feature on the boundary. Thus, the trace theorem we obtain here can be viewed as an improvement and refinement of the classical trace theorem for fractional Sobolev spaces [Formula: see text]. Similarly, the Hardy-type inequalities we establish for functions that vanish on the boundary show that functions in this generalized space have the same decay rate to the boundary as functions in the smaller space [Formula: see text]. The results we prove extend existing results shown in the Hilbert space setting with p = 2. A Poincaré-type inequality we establish for the function space under consideration, together with the new trace theorem, allows us to formulate and prove well-posedness of a nonlinear nonlocal variational problem with a conventional local boundary condition.
  5. In this paper, we study the generalized subdifferentials and the Riemannian gradient subconsistency that are the basis for non-Lipschitz optimization on embedded submanifolds of [Formula: see text]. We then propose a Riemannian smoothing steepest descent method for non-Lipschitz optimization on complete embedded submanifolds of [Formula: see text]. We prove that any accumulation point of the sequence generated by the Riemannian smoothing steepest descent method is a stationary point associated with the smoothing function employed in the method, which is necessary for the local optimality of the original non-Lipschitz problem. We also prove that any accumulation point of the sequence generated by our method that satisfies the Riemannian gradient subconsistency is a limiting stationary point of the original non-Lipschitz problem. Numerical experiments are conducted to demonstrate the advantages of Riemannian [Formula: see text] [Formula: see text] optimization over Riemannian [Formula: see text] optimization for finding sparse solutions and the effectiveness of the proposed method. Funding: C. Zhang was supported in part by the National Natural Science Foundation of China [Grant 12171027] and the Natural Science Foundation of Beijing [Grant 1202021]. X. Chen was supported in part by the Hong Kong Research Council [Grant PolyU15300219]. S. Ma was supported in part by the National Science Foundation [Grants DMS-2243650 and CCF-2308597], the UC Davis Center for Data Science and Artificial Intelligence Research Innovative Data Science Seed Funding Program, and a startup fund from Rice University. 
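A smoothing steepest descent on an embedded submanifold, in the spirit of the method described above, can be sketched on the simplest example: minimizing a smoothed ℓ1-type objective on the unit sphere, whose sparse points ±eᵢ are the minimizers. This is not the paper's algorithm: the smoothing function √(t² + μ²) for |t|, the step rule tied to μ, and the halving schedule are all illustrative assumptions.

```python
# Illustrative sketch: Riemannian smoothing steepest descent on the unit
# sphere for the smoothed objective f_mu(x) = sum_i sqrt(x_i^2 + mu^2),
# which approximates the (non-Lipschitz at 0) l1 norm as mu -> 0.
# Smoothing function, step size, and schedule are hypothetical choices.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(10)
x /= np.linalg.norm(x)                 # start on the unit sphere
l1_start = np.sum(np.abs(x))

mu = 1.0
for it in range(500):
    g = x / np.sqrt(x**2 + mu**2)      # Euclidean gradient of f_mu
    rg = g - (g @ x) * x               # Riemannian gradient: project onto tangent space
    x = x - mu * rg                    # step size tied to the smoothing parameter
    x /= np.linalg.norm(x)             # retraction: renormalize back onto the sphere
    if (it + 1) % 25 == 0:
        mu = max(mu * 0.5, 1e-6)       # drive the smoothing parameter toward zero

print("final l1 norm:", np.sum(np.abs(x)))
print("coordinates above 1e-3:", int(np.sum(np.abs(x) > 1e-3)))
```

As μ shrinks, small coordinates are contracted toward zero while the dominant coordinate survives, so the iterates drift toward a sparse point of the sphere — a toy version of the sparsity-seeking behavior the numerical experiments above report for Riemannian non-Lipschitz optimization.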