Title: A randomized incremental primal-dual method for decentralized consensus optimization
We consider a class of convex decentralized consensus optimization problems over connected multi-agent networks. Each agent in the network holds its local objective function privately and can communicate only with its directly connected agents while computing the minimizer of the sum of all objective functions. We propose a randomized incremental primal-dual method to solve this problem, in which the dual variable over the network is updated at only one randomly selected node per iteration, while the dual variables elsewhere remain the same as in the previous iteration. Communication therefore occurs only in the neighborhood of the selected node in each iteration, which greatly reduces the risk of the communication delays and failures that affect standard fully synchronized consensus algorithms. We provide a comprehensive convergence analysis, including convergence rates for the primal residual and the consensus error of the proposed algorithm, and we conduct numerical experiments to demonstrate its performance under both uniform sampling and importance sampling as the node selection strategy.
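The paper's precise update rules are not reproduced on this page; the sketch below is one plausible instance of the pattern the abstract describes, with a single sampled node performing the dual update and primal updates confined to its neighborhood. The function name `rand_inc_primal_dual`, the step sizes, and the dual penalty form are illustrative assumptions, not the paper's method.

```python
import numpy as np

def rand_inc_primal_dual(grads, adj, d, n_iters=20000, tau=0.05, sigma=0.05,
                         probs=None, seed=0):
    """Hedged sketch of a randomized incremental primal-dual iteration.

    grads : list of callables; grads[i](x) is the gradient of agent i's f_i
    adj   : neighbor lists of a connected undirected graph
    probs : node-selection probabilities (uniform or importance sampling)
    """
    rng = np.random.default_rng(seed)
    n = len(grads)
    x = np.zeros((n, d))                  # primal iterate held by each node
    y = np.zeros((n, d))                  # dual iterate held by each node
    if probs is None:
        probs = np.full(n, 1.0 / n)       # uniform sampling by default
    for _ in range(n_iters):
        i = rng.choice(n, p=probs)        # only node i wakes up this round
        # dual update at node i alone: penalize disagreement with neighbors;
        # dual variables at every other node stay as they were
        y[i] += sigma * sum(x[i] - x[j] for j in adj[i])
        # primal updates confined to the neighborhood of node i
        for j in [i] + adj[i]:
            disagree = sum(x[j] - x[k] for k in adj[j])
            x[j] -= tau * (grads[j](x[j]) + y[j] + disagree)
    return x

# toy usage: 4 agents on a ring, f_i(x) = 0.5 * ||x - c_i||^2, minimizer = mean(c_i)
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
centers = [1.0, 2.0, 3.0, 4.0]
grads = [(lambda x, c=c: x - c) for c in centers]
print(rand_inc_primal_dual(grads, adj, d=1))   # rows should drift toward 2.5
```

Step sizes may need tuning for a given graph; the point of the sketch is only that each round touches one sampled node and its neighbors, never the whole network.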
Award ID(s):
1925263 1818886
NSF-PAR ID:
10159818
Author(s) / Creator(s):
Date Published:
Journal Name:
Analysis and Applications
ISSN:
0219-5305
Page Range / eLocation ID:
1 to 25
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality. In this paper, we build on recent algorithmic progress in distributed deep learning to explore various consensus-optimality trade-offs over a fixed communication topology. First, we propose the incremental consensus-based distributed stochastic gradient descent (i-CDSGD) algorithm, which performs multiple consensus steps (where each agent communicates information with its neighbors) within each SGD iteration. Second, we propose the generalized consensus-based distributed SGD (g-CDSGD) algorithm, which enables us to navigate the full spectrum from complete consensus (all agents agree) to complete disagreement (each agent converges to its own model parameters). We analytically establish convergence of the proposed algorithms for strongly convex and nonconvex objective functions, and we also analyze momentum variants of the algorithms in the strongly convex case. We support our algorithms with numerical experiments and demonstrate significant improvements over existing methods for collaborative deep learning.
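Neither recursion is spelled out in the abstract; the sketch below shows the assumed shape of the two updates, with `n_consensus` gossip rounds per gradient step for i-CDSGD and an interpolation weight `omega` for g-CDSGD. The parameter names and the exact blending are assumptions, not the paper's formulas.

```python
import numpy as np

def i_cdsgd_step(X, W, grads, lr=0.1, n_consensus=3):
    """One assumed i-CDSGD-style step: several gossip (averaging) rounds
    with a doubly stochastic mixing matrix W, then a local gradient step
    at every agent. Row i of X is agent i's current model."""
    for _ in range(n_consensus):
        X = W @ X                          # repeated neighbor averaging
    return X - lr * np.stack([g(x) for g, x in zip(grads, X)])

def g_cdsgd_step(X, W, grads, lr=0.1, omega=0.5):
    """One assumed g-CDSGD-style step: omega in (0, 1] blends neighbor
    averaging with purely local updates, sweeping from complete
    disagreement (omega -> 0) toward complete consensus (omega -> 1)."""
    G = np.stack([g(x) for g, x in zip(grads, X)])
    return omega * (W @ X) + (1 - omega) * X - lr * G
```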
  2. The aim of this paper is to find a distributed solution of the generalized Nash equilibrium problem (GNEP) for a group of players that can communicate with each other over a connected communication network. Each player tries to minimize a local objective function of its own that may depend on the other players' decisions, and collectively all the players' decisions are subject to some globally shared resource constraints. After reformulating the local optimization problems, we introduce the notion of a network Lagrangian and recast the GNEP as the zero-finding problem of a properly defined operator. Utilizing the Douglas-Rachford operator splitting method, we propose a distributed algorithm that requires only local information exchanges between neighboring players in each iteration. Convergence of the proposed algorithm to an exact variational generalized Nash equilibrium is established under two different sets of assumptions, and its effectiveness is demonstrated using the example of a Nash-Cournot production game.
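The paper's network-Lagrangian operators are specific to the GNEP reformulation and are not reproduced here; the generic Douglas-Rachford template it instantiates, which finds a zero of a sum of two operators through their resolvents (prox operators), is sketched below on a scalar toy problem.

```python
import numpy as np

def douglas_rachford(prox_A, prox_B, z0, n_iters=200, lam=1.0):
    """Generic Douglas-Rachford splitting for a zero of A + B, given the
    resolvents of the two pieces; the paper's network Lagrangian supplies
    the specific operators, which are assumptions left abstract here."""
    z = z0.copy()
    for _ in range(n_iters):
        x = prox_A(z)                    # resolvent of A applied to z
        y = prox_B(2 * x - z)            # resolvent of B at the reflection
        z = z + lam * (y - x)            # relaxed fixed-point update
    return prox_A(z)                     # x* satisfies 0 in A(x*) + B(x*)

# toy check: minimize |x| + 0.5 * (x - 3)**2, whose solution is x = 2
soft = lambda z: np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)  # prox of |.|
quad = lambda z: (z + 3.0) / 2.0                                # prox of 0.5*(x-3)^2
print(douglas_rachford(soft, quad, np.array([0.0])))            # ~ [2.0]
```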
  3. We consider a class of multi-agent cooperative consensus optimization problems with local nonlinear convex constraints, where only those agents connected by an edge can communicate directly; hence, the optimal consensus decision must lie in the intersection of these private constraint sets. We develop an asynchronous distributed accelerated primal-dual algorithm to solve the considered problem. To the best of our knowledge, the proposed scheme is the first asynchronous method with an optimal convergence guarantee for this class of problems. In particular, we provide an optimal convergence rate of $\mathcal O(1/K)$ for suboptimality, infeasibility, and consensus violation.
  4. We consider a class of multi-agent optimization problems in which each agent has a local objective function that depends on its own decision variables and the aggregate of others, and is willing to cooperate with other agents to minimize the sum of the local objectives. After associating each agent with an auxiliary variable and the related local estimates, we apply primal decomposition to the globally coupled problem and reformulate it so that it can be solved in a distributed fashion. Based on the Douglas-Rachford method, we propose an algorithm that ensures exact convergence to a solution of the original problem. The proposed method enjoys desirable scalability by requiring each agent to keep only local estimates, the number of which grows linearly with the number of its neighbors. We illustrate the proposed algorithm by numerical simulations on a commodity distribution problem over a transport network.
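The abstract does not detail the primal-decomposition updates; as a loose stand-in for its "local estimates" idea, the sketch below uses standard dynamic average tracking, in which each agent maintains a neighbor-averaged estimate of a network-wide aggregate, so per-agent storage scales with its neighborhood size. This is a well-known substitute technique, not the paper's Douglas-Rachford-based algorithm.

```python
import numpy as np

def aggregate_tracking_step(x, s, W, grads, lr=0.05):
    """One step of standard dynamic average tracking, offered only as an
    analogy: s[i] is agent i's running estimate of the network-wide
    aggregate, refreshed by neighbor averaging with mixing matrix W.
    grads[i](x_i, s_i) is agent i's gradient given its aggregate estimate.
    NOT the paper's primal-decomposition / Douglas-Rachford scheme."""
    g = np.stack([gi(xi, si) for gi, xi, si in zip(grads, x, s)])
    x_new = x - lr * g                   # local gradient step given the estimate
    s_new = W @ s + (x_new - x)          # mix with neighbors, track the change
    return x_new, s_new
```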
  5. This paper addresses the problem of decentralized learning in the presence of data poisoning attacks. We consider a collection of nodes connected through a network, each equipped with a local function. The objective is to compute the global minimizer of the aggregated local functions in a decentralized manner, i.e., each node can use only its local function and data exchanged with the nodes it is connected to. Moreover, the nodes must agree on this minimizer despite an adversary that can arbitrarily change the local functions of a fraction of the nodes. This setting has applications in robust learning, where nodes in a network collectively train a model that minimizes the empirical loss over possibly attacked local data sets. We propose a novel decentralized learning algorithm that enables all nodes to reach consensus on the optimal model in the absence of attacks, and approximate consensus in the presence of data poisoning attacks.
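The abstract leaves the aggregation rule unspecified; one standard defense that yields approximate consensus under a bounded fraction of poisoned nodes is a coordinate-wise trimmed mean over neighboring models, sketched below as a hypothetical stand-in (the function name and the `trim` parameter are assumptions, not the paper's algorithm).

```python
import numpy as np

def robust_consensus_step(X, adj, grads, lr=0.05, trim=1):
    """Illustrative robust step: each node takes a coordinate-wise trimmed
    mean over its own and its neighbors' models (a standard defense against
    a bounded fraction of poisoned nodes), then a local gradient step.
    Each node must aggregate more than 2 * trim models, counting its own."""
    X_new = np.empty_like(X)
    for i in range(len(X)):
        vals = np.stack([X[j] for j in [i] + adj[i]])   # own + neighbor models
        vals.sort(axis=0)                               # sort each coordinate
        agg = vals[trim:len(vals) - trim].mean(axis=0)  # drop extremes, average
        X_new[i] = agg - lr * grads[i](X[i])
    return X_new
```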