In this paper, we develop a distributed consensus algorithm for agents whose states evolve on a manifold. This algorithm is complementary to traditional consensus, which has predominantly been developed for systems with dynamics on vector spaces. We provide theoretical convergence guarantees for the proposed manifold consensus provided that agents are initialized within a geodesically convex (g-convex) set. This initialization condition is not restrictive, as g-convex sets may be comparatively “large” for relevant Riemannian manifolds. Our approach to manifold consensus builds upon the notion of the Riemannian Center of Mass (RCM) and the intrinsic structure of the manifold to avoid projections in the ambient space. We first show that on a g-convex ball, all states coincide if and only if each agent’s state is the RCM of its neighbors’ states. This observation facilitates our convergence guarantee to the consensus submanifold. Finally, we provide simulation results that exemplify the linear convergence rate of the proposed algorithm and illustrate its statistical properties over randomly generated problem instances.
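As a concrete illustration of the intrinsic, projection-free update described above, the following is a minimal sketch of one Riemannian consensus step on the unit sphere, where each agent moves along the geodesic toward the average log-map direction of its neighbors. The update rule, step size, and function names are our illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def log_map(p, q):
    """Riemannian log map on the unit sphere: tangent vector at p toward q.
    Assumes p and q are not antipodal (guaranteed inside a g-convex ball)."""
    c = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(p)
    v = q - c * p
    return theta * v / np.linalg.norm(v)

def exp_map(p, v):
    """Riemannian exp map on the unit sphere: follow the geodesic from p along v."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return p
    return np.cos(n) * p + np.sin(n) * (v / n)

def consensus_step(states, neighbors, step=0.5):
    """One intrinsic consensus iteration: each agent steps toward the
    Riemannian mean direction of its neighbors, with no ambient projection."""
    new_states = []
    for i, x in enumerate(states):
        g = sum(log_map(x, states[j]) for j in neighbors[i]) / len(neighbors[i])
        new_states.append(exp_map(x, step * g))
    return new_states
```

Iterating `consensus_step` from states initialized inside a g-convex ball (for the sphere, strictly inside an open hemisphere) mirrors the setting in which the abstract's convergence guarantee applies.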
A unifying convex analysis and switching system approach to consensus with undirected communication graphs
Switching between finitely many continuous-time autonomous steepest descent dynamics for convex functions is considered. Convergence of complete solutions to common minimizers of the convex functions, if such minimizers exist, is shown. The convex functions need not be smooth and may be subject to constraints. Since the common minimizers may represent consensus in a multi-agent system modeled by an undirected communication graph, several known results about asymptotic consensus are deduced as special cases. Extensions to time-varying convex functions and to dynamics given by set-valued mappings more general than subdifferentials of convex functions are included.
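A minimal discretized sketch of the switching idea, under our own assumptions: scalar agent states, an undirected edge set, and one convex disagreement function f_ij(x) = ½(x_i − x_j)² per edge. The periodic switching signal and forward-Euler step stand in for the paper's continuous-time, possibly nonsmooth dynamics.

```python
import numpy as np

def switched_descent(x, edges, steps=200, dt=0.5):
    """Switch among per-edge steepest descents; all states converge to a
    common value, a shared minimizer of every edge's disagreement function."""
    x = np.asarray(x, dtype=float)
    for k in range(steps):
        i, j = edges[k % len(edges)]   # simple periodic switching signal
        grad = np.zeros_like(x)
        grad[i], grad[j] = x[i] - x[j], x[j] - x[i]
        x = x - dt * grad              # Euler step of the active descent
    return x

x_final = switched_descent([0.0, 3.0, -1.0], edges=[(0, 1), (1, 2)])
```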
- Award ID(s):
- 1710621
- PAR ID:
- 10094205
- Date Published:
- Journal Name:
- 2018 IEEE Conference on Decision and Control
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
The problem of finding the minimizer of a sum of convex functions is central to the field of distributed optimization. Thus, it is of interest to understand how that minimizer is related to the properties of the individual functions in the sum. In this paper, we provide an upper bound on the region containing the minimizer of the sum of two strongly convex functions. We consider two scenarios with different constraints on the upper bound of the gradients of the functions. In the first scenario, the gradient constraint is imposed on the location of the potential minimizer, while in the second scenario, the gradient constraint is imposed on a given convex set in which the minimizers of the two original functions are embedded. We characterize the boundaries of the regions containing the minimizer in both scenarios.
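For intuition, here is the simplest instance of this setting, using two strongly convex quadratics in one dimension (our illustrative example, not the paper's bound): the minimizer of the sum is a convexity-weighted average of the individual minimizers, so it always lies on the segment between them.

```python
# f1(x) = (x - x1)^2 with strong-convexity modulus m1 = 2,
# f2(x) = 2*(x - x2)^2 with modulus m2 = 4.
# Setting (f1 + f2)'(x) = 0 gives the weighted average below.
m1, m2 = 2.0, 4.0          # strong-convexity moduli
x1, x2 = 0.0, 3.0          # individual minimizers
x_star = (m1 * x1 + m2 * x2) / (m1 + m2)   # = 2.0, between x1 and x2
```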
-
Combining the classical theory of optimal transport with modern operator splitting techniques, we develop a new numerical method for nonlinear, nonlocal partial differential equations arising in models of porous media, materials science, and biological swarming. Our method proceeds as follows: first, we discretize in time, either via the classical JKO scheme or via a novel Crank–Nicolson-type method we introduce. Next, we use the Benamou–Brenier dynamical characterization of the Wasserstein distance to reduce computing the solution of the discrete-time equations to solving fully discrete minimization problems with strictly convex objective functions and linear constraints. Third, we compute the minimizers by applying a recently introduced, provably convergent primal-dual splitting scheme for three operators (Yan in J Sci Comput 1–20, 2018). By leveraging the PDEs’ underlying variational structure, our method overcomes stability issues present in previous numerical work built on explicit time discretizations, which suffer due to the equations’ strong nonlinearities and degeneracies. Our method is also naturally positivity and mass preserving and, in the case of the JKO scheme, energy decreasing. We prove that minimizers of the fully discrete problem converge to minimizers of the spatially continuous, discrete-time problem as the spatial discretization is refined. We conclude with simulations of nonlinear PDEs and Wasserstein geodesics in one and two dimensions that illustrate the key properties of our approach, including the higher-order convergence of our novel Crank–Nicolson-type method when compared to the classical JKO method.
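For reference, the discrete-time step and the dynamical reformulation the abstract refers to can be written as follows (classical forms; the notation is ours, and the paper's Crank–Nicolson-type variant differs in the time discretization):

```latex
% One JKO step with time step \tau and energy E:
\rho^{k+1} \in \operatorname*{arg\,min}_{\rho}
  \; \frac{1}{2\tau}\, W_2^2\!\left(\rho, \rho^{k}\right) + E(\rho),
% where the Wasserstein term admits the Benamou--Brenier characterization
W_2^2(\rho_0, \rho_1) = \min_{(\rho, v)}
  \int_0^1\!\!\int \rho\,|v|^2 \,dx\,dt
  \quad \text{s.t.} \quad \partial_t \rho + \nabla\!\cdot(\rho v) = 0,
  \ \rho(0) = \rho_0,\ \rho(1) = \rho_1,
% which turns each time step into a convex minimization with linear constraints.
```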
-
Structured convex optimization problems in image recovery typically involve a mix of smooth and nonsmooth functions. The common practice is to activate the smooth functions via their gradient and the nonsmooth ones via their proximity operator. We show that, although intuitively natural, this approach is not necessarily the most efficient numerically and that, in particular, activating all the functions proximally may be advantageous. To make this viewpoint viable computationally, we derive a number of new examples of proximity operators of smooth convex functions arising in applications.
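As a small worked example of activating a smooth function proximally (our illustration; the paper derives a broader catalog of such operators): for a convex quadratic f(u) = ½ uᵀQu with Q symmetric positive semidefinite, the proximity operator has the closed form prox_{γf}(x) = (I + γQ)⁻¹x, obtained by setting the gradient of u ↦ f(u) + (1/2γ)‖u − x‖² to zero.

```python
import numpy as np

def prox_quadratic(x, Q, gamma):
    """Proximity operator of f(u) = 0.5 * u^T Q u:
    solves (I + gamma*Q) u = x, the optimality condition of the prox problem."""
    n = len(x)
    return np.linalg.solve(np.eye(n) + gamma * Q, x)
```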
-
Minimizing an adversarial surrogate risk is a common technique for learning robust classifiers. Prior work showed that convex surrogate losses are not statistically consistent in the adversarial context; in other words, a minimizing sequence of the adversarial surrogate risk will not necessarily minimize the adversarial classification error. We connect the consistency of adversarial surrogate losses to properties of minimizers of the adversarial classification risk, known as adversarial Bayes classifiers. Specifically, under reasonable distributional assumptions, a convex surrogate loss is statistically consistent for adversarial learning if and only if the adversarial Bayes classifier satisfies a certain notion of uniqueness.
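For context, the adversarial surrogate risk being minimized is standardly written as follows for a binary classifier f, surrogate loss φ, and perturbation radius ε (our notation; the paper's precise distributional assumptions are not reproduced here):

```latex
R_{\phi}^{\epsilon}(f) \;=\;
  \mathbb{E}_{(x,y)}\Big[ \sup_{\|x' - x\| \le \epsilon} \phi\big(y\, f(x')\big) \Big],
% consistency asks that any sequence driving this risk to its infimum also
% drives the adversarial classification error to its infimum.
```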