Title: Distributed Consensus on Manifolds using the Riemannian Center of Mass
In this paper, we develop a distributed consensus algorithm for agents whose states evolve on a manifold. This algorithm is complementary to traditional consensus, which has predominantly been developed for systems with dynamics on vector spaces. We provide theoretical convergence guarantees for the proposed manifold consensus provided that agents are initialized within a geodesically convex (g-convex) set. This condition on initialization is not restrictive, as g-convex sets may be comparatively “large” for relevant Riemannian manifolds. Our approach to manifold consensus builds upon the notion of the Riemannian Center of Mass (RCM) and the intrinsic structure of the manifold to avoid projections in the ambient space. We first show that on a g-convex ball, all states coincide if and only if each agent’s state is the RCM of its neighbors’ states. This observation facilitates our convergence guarantee to the consensus submanifold. Finally, we provide simulation results that exemplify the linear convergence rate of the proposed algorithm and illustrate its statistical properties over randomly generated problem instances.
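To make the core update concrete, below is a minimal sketch of an RCM-style consensus step on the unit sphere, where the exponential and logarithm maps have closed forms. The step size, graph, and initialization are illustrative assumptions for this sketch, not the paper's exact scheme.

import numpy as np

def log_map(p, q):
    # Log map on the unit sphere: tangent vector at p pointing toward q.
    v = q - np.dot(p, q) * p                              # project q onto T_p
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))   # geodesic distance
    norm = np.linalg.norm(v)
    return theta * v / norm if norm > 1e-12 else np.zeros_like(p)

def exp_map(p, v):
    # Exponential map on the unit sphere: follow the geodesic from p along v.
    t = np.linalg.norm(v)
    return p if t < 1e-12 else np.cos(t) * p + np.sin(t) * v / t

def rcm_consensus_step(states, neighbors, alpha=0.5):
    # Move each agent along the Karcher-mean gradient of its neighbors' states.
    new_states = []
    for i, x in enumerate(states):
        g = sum(log_map(x, states[j]) for j in neighbors[i]) / len(neighbors[i])
        new_states.append(exp_map(x, alpha * g))
    return new_states

# Four agents on a ring, initialized inside a g-convex ball near the north pole.
rng = np.random.default_rng(0)
states = [np.array([0.0, 0.0, 1.0]) + 0.2 * rng.normal(size=3) for _ in range(4)]
states = [s / np.linalg.norm(s) for s in states]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
for _ in range(50):
    states = rcm_consensus_step(states, neighbors)
print(np.round(states[0], 4))   # agents converge numerically to a common point

On a g-convex ball this iteration contracts toward the consensus submanifold, consistent with the linear rate reported in the abstract.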
Award ID(s): 2149470
PAR ID: 10422995
Journal Name: Control Technology and Applications
ISSN: 2768-0762
Sponsoring Org: National Science Foundation
More Like this
  1. In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality. In this paper, we build on recent algorithmic progress in distributed deep learning to explore various consensus-optimality trade-offs over a fixed communication topology. First, we propose the incremental consensus-based distributed stochastic gradient descent (i-CDSGD) algorithm, which involves multiple consensus steps (where each agent communicates information with its neighbors) within each SGD iteration. Second, we propose the generalized consensus-based distributed SGD (g-CDSGD) algorithm, which enables us to navigate the full spectrum from complete consensus (all agents agree) to complete disagreement (each agent converges to individual model parameters). We analytically establish convergence of the proposed algorithms for strongly convex and nonconvex objective functions; we also analyze the momentum variants of the algorithms for the strongly convex case. We support our algorithms via numerical experiments and demonstrate significant improvements over existing methods for collaborative deep learning.
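As an illustration of the i-CDSGD idea above (multiple consensus steps per SGD iteration), here is a minimal sketch under assumptions not taken from the paper: a fixed doubly stochastic mixing matrix W and private quadratic objectives.

import numpy as np

def icdsgd_step(X, W, grads, lr=0.1, tau=3):
    # tau consensus averaging steps, then one local stochastic-gradient step.
    # X: (n_agents, dim) stacked parameters; W: doubly stochastic mixing matrix.
    for _ in range(tau):
        X = W @ X                   # each agent averages with its neighbors
    return X - lr * grads(X)        # each agent then steps on its own loss

# Toy problem: agent i privately holds f_i(x) = 0.5 * ||x - a_i||^2.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])        # private targets
grads = lambda X: X - A
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])                           # doubly stochastic
X = np.zeros((3, 2))
for _ in range(200):
    X = icdsgd_step(X, W, grads)
print(X.round(3))   # rows cluster near the average target; a small
                    # consensus-optimality gap remains, as the trade-off suggests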
  2. We give a necessary condition for a geodesic in a Riemannian manifold to run in some convex hypersurface. As a corollary, we obtain peculiar properties that hold true for every convex set in any generic Riemannian manifold (M, g). For example, if a convex set in (M, g) is bounded by a smooth hypersurface, then it is strictly convex.
  3. We consider a class of multi-agent cooperative consensus optimization problems with local nonlinear convex constraints, in which only agents connected by an edge can communicate directly; the optimal consensus decision therefore lies in the intersection of these private constraint sets. We develop an asynchronous distributed accelerated primal-dual algorithm to solve the considered problem. To the best of our knowledge, the proposed scheme is the first asynchronous method with an optimal convergence guarantee for this class of problems. In particular, we provide an optimal convergence rate of $\mathcal{O}(1/K)$ for suboptimality, infeasibility, and consensus violation.
  4. We consider a class of convex decentralized consensus optimization problems over connected multi-agent networks. Each agent in the network holds its local objective function privately and can communicate only with its directly connected agents while computing the minimizer of the sum of all objective functions. We propose a randomized incremental primal-dual method to solve this problem, in which the dual variable over the network is updated at only one randomly selected node per iteration, while the dual variables elsewhere remain the same as in the previous iteration. Communication therefore occurs only in the neighborhood of the selected node in each iteration, which greatly reduces the chance of the communication delays and failures that affect standard fully synchronized consensus algorithms. We provide a comprehensive convergence analysis, including convergence rates for the primal residual and the consensus error of the proposed algorithm, and we conduct numerical experiments to show its performance using both uniform sampling and importance sampling as node selection strategies.
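The random-node mechanism described in item 4 can be sketched as follows for scalar quadratic objectives on a ring. This plain dual-decomposition sketch illustrates updating the dual variables at a single randomly selected node per iteration; it is not the paper's full method.

import numpy as np

# Toy setup: four agents on a ring, agent i holds f_i(x) = 0.5 * (x - a_i)^2.
a = np.array([4.0, -2.0, 1.0, 3.0])        # private targets
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # ring topology
y = np.zeros(len(edges))                   # one dual variable per edge
rng = np.random.default_rng(1)
lr = 0.2
for _ in range(3000):
    # Primal step (closed form for quadratics): x_i = a_i - (B^T y)_i,
    # where B is the signed edge-node incidence matrix.
    bty = np.zeros(4)
    for e, (i, j) in enumerate(edges):
        bty[i] += y[e]
        bty[j] -= y[e]
    x = a - bty
    # Dual ascent at ONE randomly selected node: only incident edges update.
    k = rng.integers(4)
    for e, (i, j) in enumerate(edges):
        if k in (i, j):
            y[e] += lr * (x[i] - x[j])     # dual gradient on edge (i, j)
print(x.round(3))   # all entries approach mean(a) = 1.5, the consensus optimum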
  5. We consider an in-network optimal resource allocation problem in which a group of agents interacting over a connected graph want to meet a demand while minimizing their collective cost. The contribution of this paper is to design a distributed continuous-time algorithm for this problem, inspired by a recently developed first-order transformed primal-dual method. The solution applies to a cluster-based setting in which each agent may have a set of subagents, and its local cost is the sum of the costs of these subagents. The proposed algorithm guarantees exponential convergence for strongly convex costs, even when the local gradients are only locally Lipschitz, and asymptotic convergence to a point in the minimizer set for convex costs. Through numerical examples, we show that the proposed algorithm delivers faster convergence than existing distributed resource allocation algorithms.
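For comparison with item 5, a classic baseline for this problem class is the Laplacian-gradient dynamic (not the paper's transformed primal-dual method): initialize feasibly and flow along x' = -L grad f(x), which preserves the demand constraint while equalizing marginal costs. A forward-Euler sketch:

import numpy as np

# Toy problem: minimize sum_i 0.5 * c_i * x_i^2  subject to  sum_i x_i = d.
c = np.array([1.0, 2.0, 4.0])              # strongly convex local costs
d = 6.0                                    # total demand to be met
L = np.array([[ 1.0, -1.0,  0.0],          # Laplacian of a path graph 0-1-2
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])

x = np.array([d, 0.0, 0.0])                # feasible start: sum(x) == d
dt = 0.01
for _ in range(20000):                     # forward-Euler integration
    grad = c * x                           # local marginal costs f_i'(x_i)
    x = x - dt * (L @ grad)                # Laplacian-gradient dynamics
# Since 1^T L = 0, sum(x) stays equal to d; at equilibrium all marginal
# costs agree, which is exactly the KKT condition for this problem.
print(x.round(3), round(x.sum(), 6))       # -> approx. [3.429, 1.714, 0.857], 6.0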