
Title: On Consensus-Optimality Trade-offs in Collaborative Deep Learning
In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality. In this paper, we build on recent algorithmic progress in distributed deep learning to explore various consensus-optimality trade-offs over a fixed communication topology. First, we propose the incremental consensus-based distributed stochastic gradient descent (i-CDSGD) algorithm, which involves multiple consensus steps (where each agent communicates information with its neighbors) within each SGD iteration. Second, we propose the generalized consensus-based distributed SGD (g-CDSGD) algorithm, which enables us to navigate the full spectrum from complete consensus (all agents agree) to complete disagreement (each agent converges to its own model parameters). We analytically establish convergence of the proposed algorithms for strongly convex and nonconvex objective functions; we also analyze the momentum variants of the algorithms for the strongly convex case. We support our algorithms via numerical experiments and demonstrate significant improvements over existing methods for collaborative deep learning.
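The two update rules can be made concrete with a short sketch. The following is an illustration only, not the paper's exact recursions: it assumes a doubly stochastic mixing matrix W over the fixed topology, a step size alpha, a consensus-step count tau for i-CDSGD, an interpolation weight omega in [0, 1] for g-CDSGD, and a user-supplied grads function stacking each agent's local stochastic gradient.

```python
import numpy as np

def i_cdsgd_step(X, W, grads, alpha, tau=3):
    """Illustrative i-CDSGD iteration: tau rounds of neighbor averaging
    (consensus steps), then a local SGD step at every agent.
    X is (n_agents, dim); row j holds agent j's parameters."""
    for _ in range(tau):
        X = W @ X                      # each agent mixes with its neighbors
    return X - alpha * grads(X)        # simultaneous local gradient steps

def g_cdsgd_step(X, W, grads, alpha, omega=0.5):
    """Illustrative g-CDSGD iteration: omega = 1 recovers full neighbor
    averaging, omega = 0 leaves every agent fully local."""
    return omega * (W @ X) + (1.0 - omega) * X - alpha * grads(X)

# Toy usage (hypothetical): three agents with quadratic local objectives
# ||x - c_j||^2, so each local gradient is 2 (x - c_j).
rng = np.random.default_rng(0)
n, d = 3, 2
W = np.full((n, n), 1.0 / n)           # complete-graph averaging
C = rng.normal(size=(n, d))
grads = lambda X: 2 * (X - C)
X = np.zeros((n, d))
for _ in range(200):
    X = g_cdsgd_step(X, W, grads, alpha=0.05, omega=0.9)
# with omega near 1, the rows of X end up close together (near consensus)
```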
Authors:
Award ID(s):
1845969 2005804
Publication Date:
NSF-PAR ID:
10318055
Journal Name:
Frontiers in Artificial Intelligence
Volume:
4
ISSN:
2624-8212
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider a class of convex decentralized consensus optimization problems over connected multi-agent networks. Each agent in the network holds its local objective function privately and can communicate only with its directly connected agents while computing the minimizer of the sum of all objective functions. We propose a randomized incremental primal-dual method to solve this problem, in which the dual variable over the network is updated at each iteration only at a randomly selected node, while the dual variables elsewhere remain the same as in the previous iteration. Communication therefore occurs only in the neighborhood of the selected node in each iteration, which greatly reduces the chance of the communication delays and failures seen in standard fully synchronized consensus algorithms. We provide a comprehensive convergence analysis, including convergence rates for the primal residual and consensus error of the proposed algorithm, and conduct numerical experiments to show its performance using both uniform sampling and importance sampling as the node selection strategy.
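As a rough schematic of the randomized dual update described in item 1 (under assumptions not stated in the abstract: a graph Laplacian L encoding the topology, one scalar variable per node, and illustrative step sizes alpha and beta), one iteration might look like:

```python
import numpy as np

def randomized_primal_dual_step(x, y, grad, L, alpha, beta, rng):
    """Schematic iteration: the dual variable is updated only at one
    randomly drawn node i (via the Laplacian row L[i]), so new dual
    information enters only through i's neighborhood; all primal
    variables then take a gradient step with a consensus correction."""
    i = rng.integers(len(x))
    y = y.copy()
    y[i] += beta * L[i] @ x            # dual ascent at node i only
    x = x - alpha * (grad(x) + L @ y)  # primal descent step
    return x, y
```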
  2. In this paper, we study communication-efficient decentralized training of large-scale machine learning models over a network. We propose and analyze SQuARM-SGD, a decentralized training algorithm employing momentum and compressed communication between nodes regulated by a locally computable triggering rule. In SQuARM-SGD, each node performs a fixed number of local SGD (stochastic gradient descent) steps using Nesterov's momentum and then sends sparsified and quantized updates to its neighbors, but only when there has been a significant change in its model parameters since the last communication. We provide convergence guarantees for our algorithm for strongly convex and non-convex smooth objectives. We believe that ours is the first theoretical analysis of compressed decentralized SGD with momentum updates. We show that SQuARM-SGD converges at rate O(1/nT) for strongly convex objectives and at rate O(1/√nT) for non-convex objectives, thus matching the convergence rate of vanilla distributed SGD in both settings. We corroborate our theoretical understanding with experiments and compare the performance of our algorithm with the state of the art, showing that without sacrificing much accuracy, SQuARM-SGD converges at a similar rate while saving significantly in total communicated bits.
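The mechanics sketched in item 2 (local momentum steps followed by an event-triggered, sparsified-and-quantized message) can be illustrated roughly as below. This is not the paper's exact recursion; the norm-based trigger, top-k sparsifier, and scaled-sign quantizer are illustrative stand-ins.

```python
import numpy as np

def squarm_like_round(x, v, x_last_sent, grad, alpha, mu, H, thresh, k):
    """Illustrative round: H local SGD steps with Nesterov momentum,
    then a compressed update is emitted only if the model has drifted
    far enough from the last transmitted copy (the triggering rule)."""
    for _ in range(H):
        v = mu * v + grad(x + mu * v)          # Nesterov look-ahead
        x = x - alpha * v
    msg = None
    delta = x - x_last_sent
    if np.linalg.norm(delta) > thresh:         # significant change: communicate
        idx = np.argsort(np.abs(delta))[-k:]   # top-k sparsification
        msg = np.zeros_like(delta)
        scale = np.abs(delta[idx]).mean()      # scaled-sign quantization
        msg[idx] = scale * np.sign(delta[idx])
        x_last_sent = x_last_sent + msg
    return x, v, x_last_sent, msg              # msg is None when not triggered
```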
  3. Distributed stochastic gradient descent (SGD) is essential for scaling machine learning algorithms to large numbers of computing nodes. However, infrastructure variability, such as high communication delay or random node slowdowns, greatly impedes the performance of distributed SGD, especially in wireless systems and sensor networks. In this paper, we propose an algorithmic approach named Overlap Local-SGD (and its momentum variant) that overlaps communication with computation so as to speed up the distributed training procedure. The approach also helps mitigate straggler effects. We achieve this by adding an anchor model on each node: after multiple local updates, locally trained models are pulled back toward the synchronized anchor model rather than communicating with other nodes. Experimental results from training a deep neural network on the CIFAR-10 dataset demonstrate the effectiveness of Overlap Local-SGD. We also provide a convergence guarantee for the proposed algorithm under non-convex objective functions.
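The anchor-model idea in item 3 admits a compact sketch. The round below is schematic (the pull strength beta and the round length H are illustrative); the anchor is assumed to be refreshed by a background communication thread, which is what lets communication overlap with computation.

```python
def overlap_local_sgd_round(x, anchor, grad, alpha, beta, H):
    """Schematic round: H local SGD steps with no blocking
    communication, then an elastic pull toward the anchor model that
    a background thread keeps synchronized across nodes."""
    for _ in range(H):
        x = x - alpha * grad(x)          # purely local updates
    return x - beta * (x - anchor)       # pull back toward the anchor
```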
  4. This paper studies distributed submodular optimization subject to a partition matroid. We work in the value oracle model, where the agents' only access to the utility function is through a black box that returns the utility function's value. The agents communicate over a connected undirected graph and have access only to their own strategy sets. As is known in the literature, submodular maximization subject to matroid constraints is NP-hard. Hence, our objective is to propose a polynomial-time distributed algorithm that obtains a suboptimal solution with guarantees on the optimality bound. Our proposed algorithm is based on a distributed stochastic gradient ascent scheme built on the multilinear extension of the submodular set function. We use a maximum consensus protocol to minimize the inconsistency of the shared information over the network caused by delays in the flow of information while solving for the fractional solution of the multilinear extension model. Furthermore, we propose a distributed framework for finding a set solution from the fractional solution. We show that our distributed algorithm yields a strategy set whose worst-case team objective value is within 1 − 1/e − O(1/T) of the optimal solution in the value oracle model, where T is the number of communication instances among the agents. An example demonstrates our results.
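Two ingredients named in item 4, a sampled gradient of the multilinear extension and a max-consensus round, can be sketched as follows. This is an illustration only: the matroid constraint, step sizes, and the rounding to a set solution are all omitted, and f is assumed to accept a boolean membership mask.

```python
import numpy as np

def multilinear_grad_estimate(f, y, rng, n_samples=8):
    """Sampling estimate of the multilinear extension's gradient:
    F(y) = E[f(R)] with R containing element e independently with
    probability y[e], so dF/dy[e] = E[f(R + e) - f(R - e)]."""
    n = len(y)
    g = np.zeros(n)
    for _ in range(n_samples):
        R = rng.random(n) < y                   # random set drawn from y
        for e in range(n):
            with_e, without_e = R.copy(), R.copy()
            with_e[e], without_e[e] = True, False
            g[e] += f(with_e) - f(without_e)
    return g / n_samples

def max_consensus_round(Y, neighbors):
    """One max-consensus round: each agent keeps the coordinate-wise
    maximum over itself and its neighbors."""
    return np.stack([Y[[j, *nbrs]].max(axis=0)
                     for j, nbrs in enumerate(neighbors)])
```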
  5. In many real-world scenarios, the time it takes a mobile agent, e.g., a robot, to move from one location to another may vary due to exogenous events and be difficult to predict accurately. Planning in such scenarios is challenging, especially in the context of Multi-Agent Pathfinding (MAPF), where the goal is to find paths for multiple agents and temporal coordination is necessary to avoid collisions. In this work, we consider a MAPF problem with this form of time uncertainty, where we are given only upper and lower bounds on the time it takes each agent to move. The objective is to find a safe solution: one that can be executed by all agents and is guaranteed to avoid collisions. We propose two complete and optimal algorithms for finding safe solutions based on well-known MAPF algorithms, namely A* with Operator Decomposition (A* + OD) and Conflict-Based Search (CBS). Experimentally, we observe that on several standard MAPF grids the CBS-based algorithm performs better. We also explore the option of online replanning in this context, i.e., modifying the agents' plans during execution to reduce the overall execution cost. We consider two online settings: (a) when an agent can sense the current time and its current location, and (b) when the agents can also communicate seamlessly during execution. For each setting, we propose a replanning algorithm and analyze its behavior theoretically and empirically. Our experimental evaluation confirms that online replanning in both settings can indeed significantly reduce solution cost.
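At its core, the notion of a safe solution in item 5 reduces to an interval-overlap test: with only lower and upper bounds on traversal times, two agents may collide at a vertex whenever their possible occupancy windows intersect. A minimal sketch of that check (the data layout is hypothetical, and this is not the A* + OD or CBS machinery itself):

```python
def windows_may_overlap(lo1, hi1, lo2, hi2):
    """Two occupancy windows [lo, hi] can coincide iff they intersect."""
    return lo1 <= hi2 and lo2 <= hi1

def plan_is_safe(occupancy):
    """occupancy[a] is a list of (vertex, lo, hi) triples bounding when
    agent a could be at each vertex; the plan is safe here if no two
    agents can share a vertex during overlapping windows."""
    flat = [(v, lo, hi, a) for a, path in enumerate(occupancy)
            for (v, lo, hi) in path]
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            v1, lo1, hi1, a1 = flat[i]
            v2, lo2, hi2, a2 = flat[j]
            if a1 != a2 and v1 == v2 and windows_may_overlap(lo1, hi1, lo2, hi2):
                return False
    return True
```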