Title: Multivariate Monotone Inclusions in Saddle Form
We propose a novel approach to monotone operator splitting based on the notion of a saddle operator. Under investigation is a highly structured multivariate monotone inclusion problem involving a mix of set-valued, cocoercive, and Lipschitzian monotone operators, as well as various monotonicity-preserving operations among them. This model encompasses most formulations found in the literature. A limitation of existing primal-dual algorithms is that they operate in a product space that is too small to achieve full splitting of our problem in the sense that each operator is used individually. To circumvent this difficulty, we recast the problem as that of finding a zero of a saddle operator that acts on a bigger space. This leads to an algorithm of unprecedented flexibility, which achieves full splitting, exploits the specific attributes of each operator, is asynchronous, and requires activating only blocks of operators at each iteration, as opposed to all of them. The latter feature is of critical importance in large-scale problems. The weak convergence of the main algorithm is established, as well as the strong convergence of a variant. Various applications are discussed, and instantiations of the proposed framework in the context of variational inequalities and minimization problems are presented.
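To give a flavor of the saddle-operator device, here is a minimal two-operator sketch (the multivariate model and saddle operator of the paper are considerably richer): for the problem of finding x̄ with 0 ∈ Ax̄ + L*B(Lx̄), where A and B are maximally monotone and L is linear, one may instead seek a zero of the Kuhn–Tucker-type saddle operator acting on the larger primal–dual space,

$$ S\colon (x, v) \mapsto \bigl(Ax + L^{*}v\bigr) \times \bigl(B^{-1}v - Lx\bigr). $$

If (x̄, v̄) is a zero of S, then v̄ ∈ B(Lx̄) and 0 ∈ Ax̄ + L*v̄, so x̄ solves the original inclusion, and splitting methods can then activate A, B, and L individually.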
Award ID(s): 1715671
NSF-PAR ID: 10335312
Author(s) / Creator(s):
Date Published:
Journal Name: Mathematics of Operations Research
Volume: 47
Issue: 2
ISSN: 0364-765X
Page Range / eLocation ID: 1082 to 1109
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We develop new adaptive algorithms for variational inequalities with monotone operators, which capture many problems of interest, notably convex optimization and convex-concave saddle point problems. Our algorithms automatically adapt to unknown problem parameters such as the smoothness and the norm of the operator, and the variance of the stochastic evaluation oracle. We show that our algorithms are universal and simultaneously achieve the optimal convergence rates in the non-smooth, smooth, and stochastic settings. The convergence guarantees of our algorithms improve over existing adaptive methods and match the optimal non-adaptive algorithms. Additionally, prior works require that the optimization domain is bounded. In this work, we remove this restriction and give algorithms for unbounded domains that are adaptive and universal. Our general proof techniques can be used for many variants of the algorithm using one or two operator evaluations per iteration. The classical methods based on the ExtraGradient/MirrorProx algorithm require two operator evaluations per iteration, which is the dominant factor in the running time in many settings. 
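As a point of comparison for the last sentence of the preceding abstract, here is a minimal sketch of the classical extragradient update with a fixed step; the adaptive, universal step-size machinery of the paper is not reproduced, and the operator F and the bilinear test problem are illustrative assumptions:

```python
import numpy as np

def extragradient(F, x0, step=0.1, iters=2000):
    """Classical extragradient method for a monotone variational
    inequality: two operator evaluations per iteration.

    F    : callable returning the operator value F(x)
    x0   : starting point
    step : fixed step size (should be below 1/L for an L-Lipschitz F)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = x - step * F(x)   # extrapolation step (first evaluation)
        x = x - step * F(y)   # update at the leading point (second evaluation)
    return x

# Example: the monotone (skew) operator of the saddle function f(u, v) = u*v,
# i.e. F(u, v) = (v, -u); its unique zero is the saddle point (0, 0).
F = lambda z: np.array([z[1], -z[0]])
print(extragradient(F, x0=[1.0, 1.0]))  # converges toward [0, 0]
```

On this bilinear problem, plain gradient descent-ascent diverges, which is precisely why the extrapolation (second evaluation) is needed.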
  2. We establish the convergence of the forward-backward splitting algorithm based on Bregman distances for the sum of two monotone operators in reflexive Banach spaces. Even in Euclidean spaces, the convergence of this algorithm has so far been proved only in the case of minimization problems. The proposed framework features Bregman distances that vary over the iterations and a novel assumption on the single-valued operator that captures various properties scattered in the literature. In the minimization setting, we obtain rates that are sharper than existing ones.
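For reference, the standard form of the forward-backward step with a Bregman distance induced by a Legendre function f_n reads (a schematic; the article's exact assumptions on the varying distances and on the single-valued operator are more refined):

$$ x_{n+1} = \bigl(\nabla f_{n} + \gamma_{n}A\bigr)^{-1}\bigl(\nabla f_{n}(x_{n}) - \gamma_{n}Bx_{n}\bigr), \qquad \gamma_{n} > 0, $$

where A is the set-valued operator and B the single-valued one; the choice f_n = ‖·‖²/2 recovers the classical Euclidean forward-backward update x_{n+1} = (Id + γ_n A)⁻¹(x_n − γ_n B x_n).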
  3. The aim of this paper is to find the distributed solution of the generalized Nash equilibrium problem (GNEP) for a group of players that can communicate with each other over a connected communication network. Each player tries to minimize a local objective function of its own that may depend on the other players’ decisions, and collectively all the players’ decisions are subject to some globally shared resource constraints. After reformulating the local optimization problems, we introduce the notion of network Lagrangian and recast the GNEP as the zero-finding problem of a properly defined operator. Utilizing the Douglas-Rachford operator splitting method, a distributed algorithm is proposed that requires only local information exchanges between neighboring players in each iteration. The convergence of the proposed algorithm to an exact variational generalized Nash equilibrium is established under two different sets of assumptions. The effectiveness of the proposed algorithm is demonstrated using the example of a Nash-Cournot production game.
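For background, here is a sketch of the generic Douglas-Rachford iteration for two operators given through their resolvents (here projections onto two convex sets); this is the centralized baseline the paper builds on, not its distributed network algorithm, and the test sets are illustrative assumptions:

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, y0, iters=200, lam=1.0):
    """Generic Douglas-Rachford splitting: find a zero of the sum of two
    maximally monotone operators specified through their resolvents
    (here the projections prox_f and prox_g onto two convex sets)."""
    y = np.asarray(y0, dtype=float)
    for _ in range(iters):
        x = prox_f(y)              # resolvent of the first operator
        z = prox_g(2.0 * x - y)    # resolvent of the second, at the reflection
        y = y + lam * (z - x)      # governing-sequence update
    return prox_f(y)               # the shadow sequence converges to a solution

# Example: find a point in the intersection of a line and a disk.
proj_line = lambda p: np.array([p[0], 1.0])                  # project onto {y = 1}
proj_disk = lambda p: p / max(1.0, np.linalg.norm(p) / 2.0)  # project onto ||p|| <= 2
print(douglas_rachford(proj_line, proj_disk, y0=[3.0, 3.0]))
```

Each iteration touches each operator exactly once through its resolvent, which is what makes the scheme attractive for splitting a GNEP operator into locally computable pieces.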
  4. Buttazzo, G.; Casas, E.; de Teresa, L.; Glowinski, R.; Leugering, G.; Trélat, E.; Zhang, X. (Eds.)
    In this article, we report the results we obtained when investigating the numerical solution of some nonlinear eigenvalue problems for the Monge-Ampère operator v ↦ det D²v. The methodology we employ relies on the following ingredients: (i) a divergence formulation of the eigenvalue problems under consideration; (ii) a time discretization by operator splitting of an initial-value problem (a kind of gradient flow) associated with each eigenvalue problem; (iii) a finite element approximation relying on spaces of continuous piecewise affine functions. To validate the above methodology, we applied it to problems with known exact solutions: the results we obtained suggest convergence to the exact solution as the space discretization step h → 0. We also considered test problems with no known exact solutions.
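For context, one classical instance of such a nonlinear eigenvalue problem is the Monge-Ampère eigenvalue problem in the sense of Lions (whether this exact formulation is among the article's test problems is an assumption): find a nonzero convex function u and λ > 0 with

$$ \det D^{2}u = \lambda\,(-u)^{n} \ \text{ in } \Omega \subset \mathbb{R}^{n}, \qquad u = 0 \ \text{ on } \partial\Omega. $$

The nonlinearity in both u and λ is what motivates gradient-flow and operator-splitting solution strategies of the kind described above.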