Title: Multivariate Monotone Inclusions in Saddle Form
Abstract: We propose a novel approach to monotone operator splitting based on the notion of a saddle operator. Under investigation is a highly structured multivariate monotone inclusion problem involving a mix of set-valued, cocoercive, and Lipschitzian monotone operators, as well as various monotonicity-preserving operations among them. This model encompasses most formulations found in the literature. A limitation of existing primal-dual algorithms is that they operate in a product space that is too small to achieve full splitting of our problem in the sense that each operator is used individually. To circumvent this difficulty, we recast the problem as that of finding a zero of a saddle operator that acts on a bigger space. This leads to an algorithm of unprecedented flexibility, which achieves full splitting, exploits the specific attributes of each operator, is asynchronous, and requires activating only blocks of operators at each iteration, as opposed to all of them. The latter feature is of critical importance in large-scale problems. The weak convergence of the main algorithm is established, as well as the strong convergence of a variant. Various applications are discussed, and instantiations of the proposed framework in the context of variational inequalities and minimization problems are presented.
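The central computational task in the abstract above is finding a zero of a monotone (saddle) operator. As a minimal, generic illustration of that task only, and not of the block-iterative, asynchronous saddle algorithm proposed in the paper, the sketch below runs the classical proximal point iteration x_{k+1} = (Id + gamma*A)^{-1} x_k on a toy monotone operator on R^2; the operator M, the step size, and the iteration count are illustrative assumptions.

    import numpy as np

    # Toy maximally monotone operator A(x) = M x; the symmetric part of M is
    # positive definite, so A has a unique zero at the origin.
    M = np.array([[0.1, 1.0],
                  [-1.0, 0.1]])

    def resolvent(y, gamma):
        # Resolvent J_{gamma A}(y) = (I + gamma*M)^{-1} y for the linear operator A = M.
        return np.linalg.solve(np.eye(2) + gamma * M, y)

    def proximal_point(x0, gamma=1.0, iters=200):
        # Classical proximal point iteration: x_{k+1} = J_{gamma A}(x_k).
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = resolvent(x, gamma)
        return x

    print(proximal_point([3.0, -2.0]))  # tends to the zero of A at the origin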
Award ID(s): 1715671
PAR ID: 10335312
Author(s) / Creator(s):
Date Published:
Journal Name: Mathematics of Operations Research
Volume: 47
Issue: 2
ISSN: 0364-765X
Page Range / eLocation ID: 1082 to 1109
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We establish the convergence of the forward-backward splitting algorithm based on Bregman distances for the sum of two monotone operators in reflexive Banach spaces. Even in Euclidean spaces, the convergence of this algorithm has so far been proved only in the case of minimization problems. The proposed framework features Bregman distances that vary over the iterations and a novel assumption on the single-valued operator that captures various properties scattered in the literature. In the minimization setting, we obtain rates that are sharper than existing ones.
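    For reference only: in the Euclidean case with the standard quadratic kernel, forward-backward splitting reduces to the familiar proximal gradient iteration x_{k+1} = prox_{gamma*g}(x_k - gamma*grad f(x_k)). The sketch below applies that classical iteration to a lasso-type problem with synthetic data; it is not the varying-Bregman-distance, Banach-space algorithm of the item above, and the data, step size, and iteration count are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 20))   # synthetic design matrix
        b = rng.standard_normal(40)         # synthetic observations
        lam = 0.1                           # l1 regularization weight

        def grad_f(x):
            # Gradient of the smooth term f(x) = 0.5*||Ax - b||^2 (a cocoercive operator).
            return A.T @ (A @ x - b)

        def prox_g(x, gamma):
            # Proximal operator of g(x) = lam*||x||_1, i.e. componentwise soft-thresholding.
            return np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)

        def forward_backward(x0, iters=500):
            # Euclidean forward-backward step: x_{k+1} = prox_{gamma g}(x_k - gamma*grad_f(x_k)).
            gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size in (0, 2/L), with L = ||A||^2
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                x = prox_g(x - gamma * grad_f(x), gamma)
            return x

        print(forward_backward(np.zeros(20)))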
  2. We develop new adaptive algorithms for variational inequalities with monotone operators, which capture many problems of interest, notably convex optimization and convex-concave saddle point problems. Our algorithms automatically adapt to unknown problem parameters such as the smoothness and the norm of the operator, and the variance of the stochastic evaluation oracle. We show that our algorithms are universal and simultaneously achieve the optimal convergence rates in the non-smooth, smooth, and stochastic settings. The convergence guarantees of our algorithms improve over existing adaptive methods and match the optimal non-adaptive algorithms. Additionally, prior works require that the optimization domain is bounded. In this work, we remove this restriction and give algorithms for unbounded domains that are adaptive and universal. Our general proof techniques can be used for many variants of the algorithm using one or two operator evaluations per iteration. The classical methods based on the ExtraGradient/MirrorProx algorithm require two operator evaluations per iteration, which is the dominant factor in the running time in many settings. 
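    The classical baseline named in this item, ExtraGradient/MirrorProx, uses two operator evaluations per iteration. As a point of comparison only, the sketch below implements a plain, non-adaptive projected ExtraGradient iteration for a toy bilinear saddle problem on a box; the matrix C, the constraint set, and the fixed step size are illustrative, and none of the adaptive or universal features of the item above are reproduced.

        import numpy as np

        # Monotone operator of the bilinear saddle problem min_x max_y x^T C y over [-1, 1]^2 x [-1, 1]^2:
        # F(x, y) = (C y, -C^T x).
        C = np.array([[2.0, 1.0],
                      [1.0, 2.0]])

        def F(z):
            x, y = z[:2], z[2:]
            return np.concatenate([C @ y, -C.T @ x])

        def project(z):
            # Projection onto the box [-1, 1]^4.
            return np.clip(z, -1.0, 1.0)

        def extragradient(z0, gamma=0.2, iters=1000):
            # ExtraGradient: z_half = P(z - gamma*F(z)); z_next = P(z - gamma*F(z_half)).
            z = np.asarray(z0, dtype=float)
            for _ in range(iters):
                z_half = project(z - gamma * F(z))
                z = project(z - gamma * F(z_half))
            return z

        print(extragradient(np.array([1.0, -1.0, 0.5, 0.5])))  # approaches the unique saddle point at the origin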
  3. We propose a geometric framework to describe and analyse a wide array of operator splitting methods for solving monotone inclusion problems. The initial inclusion problem, which typically involves several operators combined through monotonicity-preserving operations, is seldom solvable in its original form. We embed it in an auxiliary space, where it is associated with a surrogate monotone inclusion problem with a more tractable structure and which allows for easy recovery of solutions to the initial problem. The surrogate problem is solved by successive projections onto half-spaces containing its solution set. The outer approximation half-spaces are constructed by using the individual operators present in the model separately. This geometric framework is shown to encompass traditional methods as well as state-of-the-art asynchronous block-iterative algorithms, and its flexible structure provides a pattern to design new ones. 
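    The elementary building block of the geometric framework above is the projection onto a half-space containing the solution set of the surrogate problem. A self-contained sketch of that single step (not of the full outer-approximation scheme) is given below; the particular half-space and point are arbitrary illustrations.

        import numpy as np

        def project_halfspace(x, a, beta):
            # Projection of x onto the half-space H = {z : <a, z> <= beta}.
            # Points already in H are left unchanged; others are moved along a
            # by the (positive) violation divided by ||a||^2.
            a = np.asarray(a, dtype=float)
            x = np.asarray(x, dtype=float)
            violation = a @ x - beta
            if violation <= 0.0:
                return x
            return x - (violation / (a @ a)) * a

        # Illustrative use: project the point (2, 3) onto {z : z_1 + z_2 <= 1}.
        print(project_halfspace([2.0, 3.0], [1.0, 1.0], 1.0))  # -> [0. 1.]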
  4. Saddle-point problems appear in various settings including machine learning, zero-sum stochastic games, and regression problems. We consider decomposable saddle-point problems and study an extension of the alternating direction method of multipliers to such saddle-point problems. Instead of solving the original saddle-point problem directly, this algorithm solves smaller saddle-point problems by exploiting the decomposable structure. We show the convergence of this algorithm for convex-concave saddle-point problems under a mild assumption. We also provide a sufficient condition for which the assumption holds. We demonstrate the convergence properties of the saddle-point alternating direction method of multipliers with numerical examples on a power allocation problem in communication channels and a network routing problem with adversarial costs. 
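    For context, the sketch below recalls the classical alternating direction method of multipliers that this item extends to saddle-point problems, written for a standard lasso splitting min_x 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z with synthetic data; it is not the decomposable saddle-point variant studied above, and the data, penalty parameter, and iteration count are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.standard_normal((30, 10))   # synthetic design matrix
        b = rng.standard_normal(30)         # synthetic observations
        lam, rho = 0.1, 1.0                 # l1 weight and augmented-Lagrangian penalty

        def admm_lasso(iters=300):
            # Classical ADMM for min_x 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z.
            n = A.shape[1]
            x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)   # u is the scaled dual variable
            Q = np.linalg.inv(A.T @ A + rho * np.eye(n))      # factor the x-update once
            for _ in range(iters):
                x = Q @ (A.T @ b + rho * (z - u))                                  # x-minimization
                z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)    # z-minimization (soft-threshold)
                u = u + x - z                                                      # scaled dual update
            return z

        print(admm_lasso())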
  5. In this paper, we consider a class of monotone inclusion (MI) problems of finding a zero of the sum of two monotone operators, in which one operator is maximal monotone, whereas the other is locally Lipschitz continuous. We propose primal-dual (PD) extrapolation methods to solve them using a point and operator extrapolation technique, whose parameters are chosen by a backtracking line search scheme. The proposed methods enjoy an operation complexity of [Formula: see text] and [Formula: see text], measured by the number of fundamental operations consisting only of evaluations of one operator and resolvent of the other operator, for finding an ε-residual solution of strongly and nonstrongly MI problems, respectively. The latter complexity significantly improves the previously best operation complexity [Formula: see text]. As a byproduct, complexity results of the primal-dual extrapolation methods are also obtained for finding an ε-KKT or ε-residual solution of convex conic optimization, conic constrained saddle point, and variational inequality problems under local Lipschitz continuity. We provide preliminary numerical results to demonstrate the performance of the proposed methods. Funding: This work was partially supported by the National Science Foundation [Grant IIS-2211491], the Office of Naval Research [Grant N00014-24-1-2702], and the Air Force Office of Scientific Research [Grant FA9550-24-1-0343]. 
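    The point-and-operator extrapolation idea in this item can be illustrated, in a greatly simplified fixed-step form, by a forward-reflected-backward-type update x_{k+1} = J_{gamma A}(x_k - gamma*(2*B(x_k) - B(x_{k-1}))). The sketch below uses a toy pair of operators on R^2 and a constant step gamma < 1/(2L); it omits the backtracking line search, the complexity guarantees, and the primal-dual structure of the methods above.

        import numpy as np

        # B: single-valued monotone Lipschitz operator (a rotation field), with constant L = 1.
        Mrot = np.array([[0.0, 1.0],
                         [-1.0, 0.0]])

        def B(x):
            return Mrot @ x

        def J_A(y):
            # Resolvent of A = normal cone of the box [-1, 1]^2, i.e. the projection onto the box.
            return np.clip(y, -1.0, 1.0)

        def forward_reflected_backward(x0, gamma=0.4, iters=500):
            # x_{k+1} = J_{gamma A}(x_k - gamma*(2*B(x_k) - B(x_{k-1}))), with gamma < 1/(2L).
            x_prev = np.asarray(x0, dtype=float)
            x = x_prev.copy()
            for _ in range(iters):
                x_next = J_A(x - gamma * (2.0 * B(x) - B(x_prev)))
                x_prev, x = x, x_next
            return x

        print(forward_reflected_backward(np.array([0.8, -0.6])))  # approaches the zero of A + B at the origin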