We consider a class of multi-agent cooperative consensus optimization problems with local nonlinear convex constraints, in which only agents connected by an edge can communicate directly; the optimal consensus decision must therefore lie in the intersection of the agents' private constraint sets. We develop an asynchronous distributed accelerated primal-dual algorithm to solve the considered problem. To the best of our knowledge, the proposed scheme is the first asynchronous method with an optimal convergence guarantee for this class of problems. In particular, we provide an optimal convergence rate of $$\mathcal O(1/K)$$ for suboptimality, infeasibility, and consensus violation.
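For orientation, problems of this type are commonly posed as the following saddle-point reformulation; the symbols $$f_i$$, $$g_i$$, $$\lambda_i$$, and $$y_{ij}$$ below are our own illustrative notation, not necessarily the paper's:

$$\min_{x_i \in \mathbb{R}^d} \sum_{i \in \mathcal{N}} f_i(x_i) \quad \text{s.t.} \quad g_i(x_i) \le 0 \;\; \forall i \in \mathcal{N}, \qquad x_i = x_j \;\; \forall (i,j) \in \mathcal{E},$$

with the associated Lagrangian

$$\mathcal{L}(x, \lambda, y) = \sum_{i \in \mathcal{N}} \Big( f_i(x_i) + \lambda_i^\top g_i(x_i) \Big) + \sum_{(i,j) \in \mathcal{E}} y_{ij}^\top (x_i - x_j), \qquad \lambda_i \ge 0.$$

Primal-dual methods of this family alternate descent-type steps in $$x$$ with projected ascent steps in $$(\lambda, y)$$, and rates such as $$\mathcal O(1/K)$$ are usually stated for the ergodic average of the first $$K$$ iterates.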
A randomized incremental primal-dual method for decentralized consensus optimization
We consider a class of convex decentralized consensus optimization problems over connected multi-agent networks. Each agent in the network holds its local objective function privately and can communicate only with its directly connected agents while computing the minimizer of the sum of all objective functions. We propose a randomized incremental primal-dual method to solve this problem, in which the dual variable over the network is updated at only one randomly selected node per iteration, while the dual variables elsewhere remain the same as in the previous iteration. Communication therefore occurs only in the neighborhood of the selected node in each iteration, which greatly reduces the communication delays and failures that affect standard fully synchronized consensus algorithms. We provide a comprehensive convergence analysis, including convergence rates for the primal residual and the consensus error of the proposed algorithm, and we conduct numerical experiments illustrating its performance under both uniform sampling and importance sampling as the node selection strategy.
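To make the node-sampling mechanism concrete, here is a minimal Python sketch on a toy ring network with quadratic local objectives. It illustrates the incremental update pattern described above, not the paper's exact method; the step sizes and the node-based dual formulation via the graph Laplacian are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ring network of n agents; agent i holds f_i(x) = 0.5 * (x - a[i])**2,
# so the consensus minimizer of sum_i f_i is mean(a).
n = 8
a = rng.normal(size=n)
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

# Graph Laplacian L encodes the consensus constraint L @ x = 0.
L = np.zeros((n, n))
for i, nbrs in neighbors.items():
    L[i, i] = len(nbrs)
    for j in nbrs:
        L[i, j] = -1.0

x = np.zeros(n)            # local primal iterates
y = np.zeros(n)            # node-based dual iterates
tau, sigma = 0.1, 0.05     # heuristic step sizes for this toy problem

for k in range(20000):
    i = rng.integers(n)    # randomly selected node (uniform sampling)
    # Dual update ONLY at node i; dual variables elsewhere are unchanged,
    # so communication touches only node i's neighborhood.
    y[i] += sigma * (L[i] @ x)
    # Primal updates only where (L @ y) changed: node i and its neighbors.
    for j in [i] + neighbors[i]:
        grad_fj = x[j] - a[j]              # f_j'(x_j)
        x[j] -= tau * (grad_fj + L[j] @ y)

print("consensus value ~", x.mean(), "  target:", a.mean())
print("max consensus error:", np.abs(x - x.mean()).max())
```

Importance sampling would replace the uniform `rng.integers(n)` draw with a nonuniform one weighted, for example, by node degree or local problem data.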
- PAR ID: 10159818
- Date Published:
- Journal Name: Analysis and Applications
- ISSN: 0219-5305
- Page Range / eLocation ID: 1 to 25
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality. In this paper, we build on recent algorithmic progress in distributed deep learning to explore various consensus-optimality trade-offs over a fixed communication topology. First, we propose the incremental consensus-based distributed stochastic gradient descent (i-CDSGD) algorithm, which performs multiple consensus steps (where each agent communicates with its neighbors) within each SGD iteration (see the i-CDSGD sketch after this list). Second, we propose the generalized consensus-based distributed SGD (g-CDSGD) algorithm, which lets us navigate the full spectrum from complete consensus (all agents agree) to complete disagreement (each agent converges to its own model parameters). We analytically establish convergence of the proposed algorithms for strongly convex and nonconvex objective functions, and we analyze momentum variants of the algorithms for the strongly convex case. We support our algorithms with numerical experiments and demonstrate significant improvements over existing methods for collaborative deep learning.
- We consider an in-network optimal resource allocation problem in which a group of agents interacting over a connected graph want to meet a demand while minimizing their collective cost. The contribution of this paper is a distributed continuous-time algorithm for this problem inspired by a recently developed first-order transformed primal-dual method (see the resource-allocation sketch after this list). The solution applies to a cluster-based setting in which each agent may have a set of subagents, and its local cost is the sum of those subagents' costs. The proposed algorithm guarantees exponential convergence for strongly convex local costs, even when the local gradients are only locally Lipschitz, and asymptotic convergence to a point in the minimizer set for convex local costs. Through numerical examples, we show that our proposed algorithm converges faster than existing distributed resource allocation algorithms.
- We consider a class of multi-agent optimization problems in which each agent has a local objective function that depends on its own decision variables and the aggregate of the others', and is willing to cooperate with the other agents to minimize the sum of the local objectives. After associating each agent with an auxiliary variable and the related local estimates, we apply primal decomposition to the globally coupled problem and reformulate it so that it can be solved in a distributed fashion. Building on the Douglas-Rachford method (see the splitting sketch after this list), we propose an algorithm that converges exactly to a solution of the original problem. The proposed method enjoys desirable scalability: each agent keeps only local estimates whose number grows linearly with its number of neighbors. We illustrate our proposed algorithm with numerical simulations on a commodity distribution problem over a transport network.
- This paper addresses the problem of decentralized learning in the presence of data poisoning attacks. We consider a collection of nodes connected through a network, each equipped with a local function. The objective is to compute the global minimizer of the aggregated local functions in a decentralized manner, i.e., each node can use only its local function and the data exchanged with the nodes it is connected to. Moreover, all nodes must agree on this minimizer despite an adversary that can arbitrarily change the local functions of a fraction of the nodes. This setting arises in robust learning, where the nodes of a network collectively train a model that minimizes the empirical loss over possibly attacked local data sets. We propose a novel decentralized learning algorithm that enables all nodes to reach consensus on the optimal model in the absence of attacks, and approximate consensus in the presence of data poisoning attacks (see the robust-averaging sketch after this list).
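The following minimal sketch illustrates the i-CDSGD structure from the first related abstract: several consensus (neighbor-averaging) rounds inside every SGD iteration. Full gradients of toy quadratic losses stand in for stochastic gradients, and the ring topology and mixing weights are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# n agents on a ring; agent i holds the quadratic loss f_i(w) = 0.5*||w - C[i]||^2.
n, d = 6, 3
C = rng.normal(size=(n, d))

# Doubly stochastic mixing matrix for the ring (lazy averaging weights).
Pi = np.zeros((n, n))
for i in range(n):
    Pi[i, i] = 0.5
    Pi[i, (i - 1) % n] = 0.25
    Pi[i, (i + 1) % n] = 0.25

W = np.zeros((n, d))   # row i = agent i's parameters
lr, tau = 0.1, 3       # tau = consensus steps per SGD iteration

for k in range(500):
    for _ in range(tau):          # (1) incremental consensus rounds
        W = Pi @ W
    W -= lr * (W - C)             # (2) local gradient step on each f_i

print("per-agent spread:", np.abs(W - W.mean(axis=0)).max())
print("distance to optimum:", np.linalg.norm(W.mean(axis=0) - C.mean(axis=0)))
```

With a constant step size the agents settle in a neighborhood of the minimizer of the summed losses, and increasing `tau` tightens consensus at the cost of more communication per iteration, which is the trade-off the abstract describes.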
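For the resource-allocation abstract, a forward-Euler discretization of generic primal-dual (saddle-point) dynamics shows the continuous-time mechanism. The quadratic costs and the centrally tracked dual variable are simplifications of ours; the paper's method keeps the dual estimate distributed as well.

```python
import numpy as np

# Allocate w to meet sum(w) = demand while minimizing sum_i 0.5*q[i]*w[i]**2.
# Closed form for checking: w_i* = mu*/q_i with mu* = demand / sum(1/q).
q = np.array([1.0, 2.0, 4.0, 5.0])
demand = 10.0

w = np.zeros_like(q)   # local allocations
mu = 0.0               # dual variable for the demand constraint
dt = 0.01              # Euler step for the continuous-time flow

for _ in range(20000):
    w_dot = -(q * w) + mu         # primal flow: -grad of local cost + dual pressure
    mu_dot = demand - w.sum()     # dual flow: current constraint violation
    w += dt * w_dot
    mu += dt * mu_dot

w_star = (demand / (1.0 / q).sum()) / q
print("allocation:", np.round(w, 4))
print("closed form:", np.round(w_star, 4))
```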
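The third related abstract builds on the Douglas-Rachford method; the sketch below shows the bare DR iteration on a two-term toy problem with closed-form proximal maps. It illustrates the splitting itself, not the paper's primal-decomposition algorithm.

```python
import numpy as np

# Douglas-Rachford splitting for min_x f(x) + g(x), with
# f(x) = 0.5*||x - a||^2 and g = indicator of the box [lo, hi]^d.
a = np.array([2.0, -3.0, 0.5])
lo, hi = -1.0, 1.0
gamma = 1.0            # prox scaling parameter

def prox_f(v):
    # prox of gamma*f: argmin_x 0.5*||x - a||^2 + ||x - v||^2 / (2*gamma)
    return (v + gamma * a) / (1.0 + gamma)

def prox_g(v):
    # prox of the box indicator = Euclidean projection onto the box
    return np.clip(v, lo, hi)

z = np.zeros_like(a)
for _ in range(200):
    x = prox_f(z)
    y = prox_g(2 * x - z)   # evaluate g's prox at the reflected point
    z = z + (y - x)         # update the governing sequence

print("solution:", x)        # expect np.clip(a, lo, hi) = [1., -1., 0.5]
```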
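The last abstract does not spell out its update rule, so the sketch below swaps in a standard trimmed-mean gossip step as a generic illustration of approximate consensus under poisoning: each honest node averages its neighborhood after discarding the b largest and b smallest received values. The topology, attacker model, and trimming rule are all our own assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

n, b = 10, 1   # 10 nodes; trim b extremes from each side
neighbors = {i: [(i + k) % n for k in (-2, -1, 1, 2)] for i in range(n)}

x = rng.normal(size=n)
x[0] = 100.0   # node 0 is adversarial and broadcasts an outlier value

for _ in range(50):
    x_new = x.copy()
    for i in range(1, n):   # honest nodes only
        vals = np.sort(np.array([x[j] for j in neighbors[i]] + [x[i]]))
        x_new[i] = vals[b:len(vals) - b].mean()   # trimmed-mean screen
    x_new[0] = 100.0        # the adversary never updates honestly
    x = x_new

print("honest nodes after screening:", np.round(x[1:], 3))
```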