Title: On the linear convergence of additive Schwarz methods for the p-Laplacian
Abstract We consider additive Schwarz methods for boundary value problems involving the $$p$$-Laplacian. While existing theoretical estimates suggest a sublinear convergence rate for these methods, numerical experiments demonstrate a linear convergence rate. In this paper we narrow the gap between these theoretical and empirical results by presenting a novel convergence analysis. First, we present a new convergence theory for additive Schwarz methods written in terms of a quasi-norm. This quasi-norm behaves like the Bregman distance of the convex energy functional associated with the problem. Second, we provide a quasi-norm version of the Poincaré–Friedrichs inequality, which plays a crucial role in deriving a quasi-norm stable decomposition for a two-level domain decomposition setting. Using these key elements, we establish the asymptotic linear convergence of additive Schwarz methods for the $$p$$-Laplacian.
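The Bregman-distance behaviour that the quasi-norm mimics can be made concrete. Below is a minimal, purely illustrative Python sketch under our own assumptions (a 1-D uniform-grid finite-difference discretization; all names are ours, not the paper's): it evaluates the p-Laplacian energy and the Bregman distance of that convex energy, which is nonnegative by convexity.

```python
# Illustrative 1-D sketch (our own discretization, not the paper's):
# p-Laplacian energy J(u) ~ (1/p) * integral of |u'|^p on a uniform grid
# with zero boundary values, and the Bregman distance of J, which the
# quasi-norm in the analysis behaves like.
p = 4.0
n = 9                  # grid points on [0, 1], boundaries included
h = 1.0 / (n - 1)

def energy(u):
    # J(u) ~ (h/p) * sum_i |(u[i+1] - u[i]) / h|^p
    return sum(abs((u[i + 1] - u[i]) / h) ** p for i in range(n - 1)) * h / p

def grad(u):
    # gradient of J with respect to the nodal values
    g = [0.0] * n
    for i in range(n - 1):
        s = (u[i + 1] - u[i]) / h
        flux = abs(s) ** (p - 2) * s   # |u'|^{p-2} u' at this cell
        g[i + 1] += flux
        g[i] -= flux
    return g

def bregman(v, u):
    # D_J(v, u) = J(v) - J(u) - <J'(u), v - u>, nonnegative by convexity
    gu = grad(u)
    return energy(v) - energy(u) - sum(gu[i] * (v[i] - u[i]) for i in range(n))

u = [t * (1 - t) for t in (i * h for i in range(n))]        # sample state
v = [0.5 * t * (1 - t) for t in (i * h for i in range(n))]  # comparison state
d = bregman(v, u)   # >= 0, and = 0 only when the gradients agree on v - u
```

For p = 2 this distance reduces to the squared energy norm of v - u; for general p it is the degenerate object the quasi-norm analysis is built around.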
Award ID(s):
2208499
PAR ID:
10592941
Author(s) / Creator(s):
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
IMA Journal of Numerical Analysis
ISSN:
0272-4979
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract In this paper, we introduce a quasi-Newton method optimized for efficiently solving quasi-linear elliptic equations and systems, with a specific focus on GPU-based computation. By approximating the Jacobian matrix with a combination of a linear Laplacian and simplified nonlinear terms, our method reduces the computational overhead typical of traditional Newton methods while handling the large, sparse matrices generated from discretized PDEs. We also provide a convergence analysis demonstrating local convergence to the exact solution under optimal choices of the regularization parameter, ensuring stability and efficiency in each iteration. Numerical experiments in two- and three-dimensional domains validate the proposed method's robustness and computational gains with a tensor product implementation. This approach offers a promising pathway for accelerating solvers for quasi-linear elliptic equations and systems, expanding the feasibility of complex simulations in physics, engineering, and other fields that leverage advanced hardware capabilities.
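The idea of replacing the Jacobian by a fixed linear Laplacian plus a simplified term can be sketched in one dimension. The snippet below is a hedged illustration under our own assumptions (model problem -u'' + u^3 = f, grid size, and constant c are ours, not the paper's GPU setting): the true Jacobian -d2/dx2 + 3u^2 is replaced by the fixed operator -d2/dx2 + c, so one tridiagonal solve pattern serves every iteration.

```python
# Hypothetical 1-D sketch: quasi-Newton iteration for -u'' + u^3 = f with
# zero boundary values. The Jacobian -d2/dx2 + 3u^2 is approximated by the
# fixed operator -d2/dx2 + c (linear Laplacian plus a simplified term).
n, h, c = 15, 1.0 / 16, 1.0   # interior points, mesh size, regularization

def residual(u, f):
    # F(u)_i = (-u_{i-1} + 2u_i - u_{i+1}) / h^2 + u_i^3 - f_i
    r = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r.append((2 * u[i] - left - right) / h ** 2 + u[i] ** 3 - f[i])
    return r

def solve_tridiag(rhs):
    # Thomas algorithm for the constant tridiagonal system with
    # off-diagonal a = -1/h^2 and diagonal b = 2/h^2 + c
    a, b = -1.0 / h ** 2, 2.0 / h ** 2 + c
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = a / b, rhs[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = a / m
        dp[i] = (rhs[i] - a * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

f = [1.0] * n
u = [0.0] * n
for _ in range(30):                       # quasi-Newton sweeps
    du = solve_tridiag(residual(u, f))
    u = [ui - di for ui, di in zip(u, du)]
```

Because the frozen operator stays close to the true Jacobian for moderate u, the iteration converges linearly with a small contraction factor while reusing the same factorization every step.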
  2. Abstract Bi-stochastic normalization provides an alternative normalization of graph Laplacians in graph-based data analysis and can be computed efficiently by Sinkhorn–Knopp (SK) iterations. This paper proves the convergence, with rates, of the bi-stochastically normalized graph Laplacian to the manifold (weighted) Laplacian when $$n$$ data points are i.i.d. sampled from a general $$d$$-dimensional manifold embedded in a possibly high-dimensional space. Under a certain joint limit of $$n \to \infty $$ and kernel bandwidth $$\epsilon \to 0$$, the point-wise convergence rate of the graph Laplacian operator (in 2-norm) is proved to be $$O(n^{-1/(d/2+3)})$$ at finite large $$n$$ up to log factors, achieved at the scaling $$\epsilon \sim n^{-1/(d/2+3)}$$. When the manifold data are corrupted by outlier noise, we prove a point-wise consistency of the graph Laplacian that matches the rate for clean manifold data plus an additional term proportional to the boundedness of the inner products of the noise vectors among themselves and with the data vectors. Motivated by our analysis, which suggests that an approximate rather than exact bi-stochastic normalization achieves the same consistency rate, we propose an approximate and constrained matrix scaling problem that can be solved by SK iterations with early termination. Numerical experiments support our theoretical results and show the robustness of the bi-stochastically normalized graph Laplacian to high-dimensional outlier noise.
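A generic (unconstrained) form of the SK iteration mentioned above can be sketched as follows; the small kernel matrix and iteration count are illustrative assumptions, not the paper's constrained variant.

```python
# Sinkhorn-Knopp sketch: alternately rescale rows and columns of a positive
# kernel matrix W until the scaled matrix r_i * W_ij * c_j is (approximately)
# doubly stochastic. This is the generic iteration, not the constrained
# early-terminated variant proposed in the paper.
def sinkhorn(W, iters=200):
    n = len(W)
    r = [1.0] * n   # row scaling factors
    c = [1.0] * n   # column scaling factors
    for _ in range(iters):
        for i in range(n):
            r[i] = 1.0 / sum(W[i][j] * c[j] for j in range(n))
        for j in range(n):
            c[j] = 1.0 / sum(r[i] * W[i][j] for i in range(n))
    return [[r[i] * W[i][j] * c[j] for j in range(n)] for i in range(n)]

W = [[1.0, 0.5, 0.2],
     [0.5, 1.0, 0.7],
     [0.2, 0.7, 1.0]]   # toy symmetric positive kernel matrix
P = sinkhorn(W)          # rows and columns of P each sum to ~1
```

For strictly positive kernels the iteration contracts linearly (Birkhoff's theorem), which is why the early termination studied in the paper loses nothing at the statistical rate.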
  3. Abstract In this paper, we study and prove the non-asymptotic superlinear convergence rate of the Broyden class of quasi-Newton algorithms, which includes the Davidon–Fletcher–Powell (DFP) method and the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. The asymptotic superlinear convergence rate of these quasi-Newton methods has been extensively studied in the literature, but their explicit finite-time local convergence rate has not been fully investigated. In this paper, we provide a finite-time (non-asymptotic) convergence analysis for Broyden quasi-Newton algorithms under the assumptions that the objective function is strongly convex, its gradient is Lipschitz continuous, and its Hessian is Lipschitz continuous at the optimal solution. We show that in a local neighborhood of the optimal solution, the iterates generated by both DFP and BFGS converge to the optimal solution at a superlinear rate of $$(1/k)^{k/2}$$, where $$k$$ is the number of iterations. We also prove that a similar local superlinear convergence result holds in the case that the objective function is self-concordant. Numerical experiments on several datasets confirm our explicit convergence rate bounds. Our theoretical guarantee is one of the first results that provide a non-asymptotic superlinear convergence rate for quasi-Newton methods.
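As a toy illustration of quasi-Newton superlinear convergence (not the paper's analysis): in one dimension, every member of the Broyden class reduces to the secant method applied to the gradient, since the single curvature estimate is the difference quotient. A minimal sketch on a strongly convex function, with the objective and starting points chosen by us:

```python
# 1-D quasi-Newton (secant) sketch on the strongly convex
# f(x) = x**4 / 4 + x**2 / 2, whose unique minimizer is x = 0.
def g(x):
    # gradient of f; strictly increasing, so the secant slope is >= 1
    return x ** 3 + x

x_prev, x = 1.0, 0.8
for _ in range(12):
    if x == x_prev:          # already converged; avoid a zero denominator
        break
    # secant approximation of g'(x), the 1-D Broyden/BFGS/DFP curvature
    slope = (g(x) - g(x_prev)) / (x - x_prev)
    x_prev, x = x, x - g(x) / slope
```

The error roughly satisfies e_{k+1} = O(e_k * e_{k-1}), the superlinear behaviour whose non-asymptotic rate the paper quantifies in general dimension.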
  4. Tâtonnement is a simple, intuitive market process in which prices are iteratively adjusted based on the difference between demand and supply. Many variants under different market assumptions have been studied and shown to converge to a market equilibrium, in some cases at a fast rate. However, the classical case of linear Fisher markets has long eluded analysis, and it remains unclear whether tâtonnement converges in this case. We show that, for a sufficiently small step size, the prices given by the tâtonnement process are guaranteed to converge to equilibrium prices, up to a small approximation radius that depends on the step size. To achieve this, we consider the dual Eisenberg–Gale convex program in the price space, view tâtonnement as subgradient descent on this convex program, and utilize novel last-iterate convergence results for subgradient descent under error bound conditions. In doing so, we show that the convex program satisfies a particular error bound condition, the quadratic growth condition, and that the price sequence generated by tâtonnement is bounded above and away from zero. We also show that a similar convergence result holds for tâtonnement in quasi-linear Fisher markets. Numerical experiments demonstrate that the theoretical linear convergence aligns with empirical observations.
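The price-update rule can be sketched in a few lines. The example below deliberately uses a Cobb-Douglas Fisher market rather than the linear market studied above, because Cobb-Douglas demand is single-valued and keeps the sketch self-contained; the budgets, spending shares, starting prices, and step size are all illustrative assumptions.

```python
# Tatonnement sketch in a Cobb-Douglas Fisher market (a simplification of
# the linear case in the paper): buyer i spends the fixed fraction a[i][j]
# of budget B[i] on good j, and each price moves with its excess demand.
B = [1.0, 1.0]                      # buyer budgets
a = [[0.75, 0.25], [0.25, 0.75]]    # Cobb-Douglas spending shares
p = [0.5, 2.0]                      # initial prices; unit supply per good
step = 0.05                         # small step size, as the theory requires
for _ in range(2000):
    demand = [sum(a[i][j] * B[i] for i in range(len(B))) / p[j]
              for j in range(len(p))]
    p = [pj + step * (dj - 1.0) for pj, dj in zip(p, demand)]
# equilibrium: each price equals total spending on that good, here (1.0, 1.0)
```

In this smooth setting the update is exactly gradient descent on the dual convex program, and the iterates contract linearly toward the equilibrium prices, mirroring the linear rate established for the harder linear and quasi-linear markets.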
  5. Abstract This paper introduces a general framework of Semi-parametric TEnsor Factor Analysis (STEFA) that focuses on the methodology and theory of low-rank tensor decomposition with auxiliary covariates. STEFA models extend tensor factor models by incorporating auxiliary covariates in the loading matrices. We propose an iteratively projected singular value decomposition (IP-SVD) algorithm for the semi-parametric estimation. It iteratively projects tensor data onto the linear space spanned by the basis functions of covariates and applies singular value decomposition on matricized tensors over each mode. We establish the convergence rates of the loading matrices and the core tensor factor. The theoretical results require only a sub-exponential noise distribution, which is weaker than the sub-Gaussian tail assumption common in the literature. Compared with the Tucker decomposition, IP-SVD yields more accurate estimators with a faster convergence rate. Beyond estimation, we propose several prediction methods with new covariates based on the STEFA model. On both synthetic and real tensor data, we demonstrate the efficacy of the STEFA model and the IP-SVD algorithm on estimation and prediction tasks.
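The projection step behind IP-SVD can be illustrated in the matrix (2-mode) case; a genuine tensor version would repeat the projection and SVD over each mode. Everything below (dimensions, noise level, variable names) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

# Matrix-case sketch of the IP-SVD projection idea: project the data onto
# the span of the covariate basis, then take an SVD of the projected
# unfolding to recover the covariate-driven loading subspace.
rng = np.random.default_rng(0)
n, m, r, q = 60, 40, 2, 3
X = rng.normal(size=(n, q))            # auxiliary covariate basis
G = rng.normal(size=(q, r))            # loading as a function of covariates
A = X @ G                              # true loading matrix, col(A) in col(X)
F = rng.normal(size=(r, m))            # latent factors
Y = A @ F + 0.1 * rng.normal(size=(n, m))   # observed (unfolded) data

P = X @ np.linalg.solve(X.T @ X, X.T)  # orthogonal projector onto span(X)
U, s, Vt = np.linalg.svd(P @ Y, full_matrices=False)
A_hat = U[:, :r]                       # estimated loading subspace
```

Projecting before the SVD discards the noise component orthogonal to the covariate span, which is the source of the accuracy gain over a plain Tucker/SVD step.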