We consider the problem of distributed pose graph optimization (PGO) that has important applications in multi-robot simultaneous localization and mapping (SLAM). We propose the majorization minimization (MM) method for distributed PGO (MM-PGO) that applies to a broad class of robust loss kernels. The MM-PGO method is guaranteed to converge to first-order critical points under mild conditions. Furthermore, noting that the MM-PGO method is reminiscent of proximal methods, we leverage Nesterov's method and adopt adaptive restarts to accelerate convergence. The resulting accelerated MM methods for distributed PGO, both with a master node in the network (AMM-PGO*) and without (AMM-PGO#), converge faster than the MM-PGO method without sacrificing theoretical guarantees. In particular, the AMM-PGO# method, which needs no master node and is fully decentralized, features a novel adaptive restart scheme and has a rate of convergence comparable to that of the AMM-PGO* method using a master node to aggregate information from all the nodes. The efficacy of this work is validated through extensive applications to 2D and 3D SLAM benchmark datasets and comprehensive comparisons against existing state-of-the-art methods, indicating that our MM methods converge faster and result in better solutions to distributed PGO. The code is available at https://github.com/MurpheyLab/DPGO.
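The accelerated variants described above follow a standard pattern: minimize a surrogate that majorizes the robust PGO cost, extrapolate with Nesterov momentum, and restart the momentum whenever acceleration fails to decrease the cost. The Python sketch below illustrates that pattern on an abstract objective; cost and minimize_surrogate are placeholders (in the paper the surrogate is block-separable over SE(d)^n so each robot minimizes its own block), and this is an illustration, not the released implementation.

import numpy as np

def accelerated_mm(x0, cost, minimize_surrogate, max_iters=1000, tol=1e-9):
    # x0 is a flat parameter vector for illustration; in MM-PGO the variables
    # live on SE(d)^n and the surrogate is block-separable across robots.
    x_prev = np.array(x0, dtype=float)
    y = x_prev.copy()          # extrapolated point the surrogate is built at
    t = 1.0                    # Nesterov momentum parameter
    for _ in range(max_iters):
        # MM step: minimize a surrogate that majorizes the cost at y.
        x = minimize_surrogate(y)
        # Adaptive restart: if acceleration fails to decrease the cost,
        # discard the momentum and take a plain (monotone) MM step instead.
        if cost(x) > cost(x_prev):
            y, t = x_prev.copy(), 1.0
            x = minimize_surrogate(y)
        # Nesterov extrapolation for the next surrogate point.
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        if np.linalg.norm(x - x_prev) < tol:
            return x
        x_prev, t = x, t_next
    return x_prev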
Majorization Minimization Methods for Distributed Pose Graph Optimization with Convergence Guarantees
In this paper, we consider the problem of distributed pose graph optimization (PGO) that has extensive applications in multi-robot simultaneous localization and mapping (SLAM). We propose majorization minimization methods for distributed PGO and show that our proposed methods are guaranteed to converge to first-order critical points under mild conditions. Furthermore, since our proposed methods rely on a proximal operator of distributed PGO, the convergence rate can be significantly accelerated with Nesterov's method, and more importantly, the acceleration induces no compromise of theoretical guarantees. In addition, we also present accelerated majorization minimization methods for the distributed chordal initialization that have a quadratic convergence, which can be used to compute an initial guess for distributed PGO. The efficacy of this work is validated through applications to a number of 2D and 3D SLAM datasets and comparisons with existing state-of-the-art methods, which indicates that our proposed methods have faster convergence and result in better solutions to distributed PGO.
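The connection to proximal operators mentioned above can be made concrete with a standard majorization argument, shown here schematically for a zeta-smooth cost on a vector space (the surrogates actually used in the paper are constructed per edge over SE(d)^n and account for the chordal structure):

\[
F(x) \;\le\; \widehat{F}(x \mid x^k) \;:=\; F(x^k) + \langle \nabla F(x^k),\, x - x^k \rangle + \tfrac{\zeta}{2}\,\lVert x - x^k \rVert^2,
\qquad \widehat{F}(x^k \mid x^k) = F(x^k),
\]
\[
x^{k+1} \;=\; \operatorname*{arg\,min}_{x}\; \widehat{F}(x \mid x^k) \;=\; x^k - \tfrac{1}{\zeta}\,\nabla F(x^k).
\]

Since \(F(x^{k+1}) \le \widehat{F}(x^{k+1} \mid x^k) \le \widehat{F}(x^k \mid x^k) = F(x^k)\), each MM update is a proximal-style (gradient) step that never increases the cost, which is exactly the structure Nesterov's method accelerates.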
- Award ID(s): 1662233
- PAR ID: 10283591
- Date Published:
- Journal Name: International Conference on Intelligent Robots and Systems
- Page Range / eLocation ID: 5058 to 5065
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- In this paper, we generalize proximal methods that were originally designed for convex optimization on normed vector spaces to non-convex pose graph optimization (PGO) on special Euclidean groups, and show that our proposed generalized proximal methods for PGO converge to first-order critical points. Furthermore, we propose methods that significantly accelerate the rates of convergence almost without loss of any theoretical guarantees. In addition, our proposed methods can be easily distributed and parallelized with no compromise of efficiency. The efficacy of this work is validated through implementation on simultaneous localization and mapping (SLAM) and distributed 3D sensor network localization, which indicates that our proposed methods converge to an accuracy sufficient for practical use considerably faster than existing techniques.
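To illustrate the generalization at a high level, the following Python sketch shows one proximal-style update for a single SE(3) pose, where an ambient-space descent step on the rotation is mapped back onto SO(3) by projection. This is a schematic illustration under assumed inputs (grad_R, grad_t, step), not the update rule derived in that paper.

import numpy as np

def project_to_SO3(M):
    # Nearest rotation matrix to M in the Frobenius norm (orthogonal Procrustes
    # with a determinant correction to stay in SO(3)).
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

def proximal_pose_step(R, t, grad_R, grad_t, step):
    # Ambient-space (Euclidean) descent on (R, t) followed by a projection of
    # the rotation block back onto SO(3); the translation needs no projection.
    R_new = project_to_SO3(R - step * grad_R)
    t_new = t - step * grad_t
    return R_new, t_new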
- In this paper, we present CPL-Sync, a certifiably correct algorithm to solve planar pose graph optimization (PGO) using the complex number representation. We formulate planar PGO as maximum likelihood estimation (MLE) on the product of unit complex numbers, and relax this non-convex quadratic complex optimization problem to complex semidefinite programming (SDP). Furthermore, we simplify the corresponding semidefinite program to Riemannian staircase optimization (RSO) on complex oblique manifolds, which can be solved with the Riemannian trust region (RTR) method. In addition, we prove that the SDP relaxation and RSO simplification are tight as long as the noise magnitude is below a certain threshold. The efficacy of this work is validated through comparisons with existing methods as well as applications to planar PGO in simultaneous localization and mapping (SLAM), which indicates that the proposed algorithm is more efficient and capable of solving planar PGO certifiably. The C++ code for CPL-Sync is available at https://github.com/fantaosha/CPL-Sync.
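The complex-number parameterization can be illustrated with the (unrelaxed) planar PGO cost: each heading theta_i is stored as a unit complex number z_i = exp(i*theta_i), so the rotation residuals become quadratic in the complex variables. The sketch below uses an assumed edge format and weights (kappa, tau) and is only meant to show the representation, not the CPL-Sync solver.

import numpy as np

def planar_pgo_cost(z, t, edges):
    # z: (n,) complex array with |z_k| = 1 encoding headings, t: (n, 2) translations,
    # edges: iterable of (i, j, theta_ij, t_ij, kappa, tau) relative measurements
    # with (assumed) rotation weight kappa and translation weight tau.
    cost = 0.0
    for i, j, theta_ij, t_ij, kappa, tau in edges:
        z_ij = np.exp(1j * theta_ij)
        cost += kappa * np.abs(z[j] - z[i] * z_ij) ** 2          # rotation residual
        R_i = np.array([[z[i].real, -z[i].imag],
                        [z[i].imag,  z[i].real]])                # R(theta_i) from z_i
        cost += tau * np.linalg.norm(t[j] - t[i] - R_i @ np.asarray(t_ij)) ** 2
    return cost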
- We present a novel framework for collaboration amongst a team of robots performing Pose Graph Optimization (PGO) that addresses two important challenges for multi-robot SLAM: i) enabling information exchange "on-demand" via Active Rendezvous without using a map or the robots' locations, and ii) rejecting outlying measurements. Our key insight is to exploit relative position data present in the communication channel between robots to improve the ground-truth accuracy of PGO. We develop an algorithmic and experimental framework for integrating Channel State Information (CSI) with multi-robot PGO; it is distributed and applicable in low-lighting or featureless environments where traditional sensors often fail. We present extensive experimental results on actual robots and observe that using Active Rendezvous results in a 64% reduction in ground-truth pose error and that using CSI observations to aid outlier rejection reduces ground-truth pose error by 32%. These results show the potential of integrating communication as a novel sensor for SLAM.
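As a rough illustration of the second ingredient, outlier rejection aided by an independent relative-position cue, the following sketch gates a candidate loop closure with a chi-square consistency test against a CSI-derived relative position estimate. The interface (t_loop, t_csi, cov_csi) is hypothetical and the test is a generic consistency check, not the authors' specific pipeline.

import numpy as np

def accept_loop_closure(t_loop, t_csi, cov_csi, chi2_thresh=5.991):
    # 5.991 is the 95% chi-square quantile with 2 degrees of freedom (planar case).
    r = np.asarray(t_loop) - np.asarray(t_csi)
    m2 = r @ np.linalg.solve(cov_csi, r)    # squared Mahalanobis distance
    return m2 <= chi2_thresh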
- In recent years, there has been a growing need to train machine learning models on huge volumes of data. Therefore, designing efficient distributed optimization algorithms for empirical risk minimization (ERM) has become an active and challenging research topic. In this paper, we propose a flexible framework for distributed ERM training through solving the dual problem, which provides a unified description and comparison of existing methods. Our approach requires only approximate solutions of the sub-problems involved in the optimization process, and is versatile enough to be applied to many large-scale machine learning problems including classification, regression, and structured prediction. We show that our framework enjoys global linear convergence for a broad class of non-strongly-convex problems, and that some specific choices of the sub-problems can even achieve much faster convergence than existing approaches by a refined analysis. This improved convergence rate is also reflected in the superior empirical performance of our method.
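To make the dual-decomposition idea concrete, here is a sketch of one familiar instance: a CoCoA-style distributed dual coordinate-ascent solver for ridge regression, in which each worker approximately solves its local dual sub-problem and a coordinator averages the resulting primal updates. The function and parameter names are illustrative assumptions; the paper's framework is more general than this special case.

import numpy as np

def local_sdca_pass(X, y, alpha, w, lam, n_total):
    # Approximate local sub-problem: one pass of dual coordinate ascent with
    # closed-form updates for the squared loss over this worker's data shard.
    d_alpha = np.zeros_like(alpha)
    dw = np.zeros_like(w)
    for i in range(X.shape[0]):
        x_i = X[i]
        residual = y[i] - x_i @ (w + dw) - (alpha[i] + d_alpha[i])
        delta = residual / (1.0 + x_i @ x_i / (lam * n_total))
        d_alpha[i] += delta
        dw += delta * x_i / (lam * n_total)
    return d_alpha, dw

def cocoa_ridge(shards, lam=0.1, rounds=50):
    # shards: list of (X_k, y_k) data blocks; returns the primal model w of
    # ridge regression: min_w (1/(2n)) * sum_i (x_i @ w - y_i)^2 + (lam/2) * ||w||^2.
    n_total = sum(X.shape[0] for X, _ in shards)
    K = len(shards)
    w = np.zeros(shards[0][0].shape[1])
    alphas = [np.zeros(X.shape[0]) for X, _ in shards]
    for _ in range(rounds):
        results = [local_sdca_pass(X, y, a, w, lam, n_total)
                   for (X, y), a in zip(shards, alphas)]
        # Conservative 1/K averaging keeps the aggregated primal-dual pair
        # consistent: w remains equal to sum_i alpha_i * x_i / (lam * n_total).
        for a, (d_alpha, _) in zip(alphas, results):
            a += d_alpha / K
        w = w + sum(dw for _, dw in results) / K
    return w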