

Title: An Accelerated Asynchronous Distributed Method for Convex Constrained Optimization Problems
We consider a class of multi-agent cooperative consensus optimization problems with local nonlinear convex constraints, in which only agents connected by an edge can communicate directly; the constraint sets are private to the agents, so the optimal consensus decision must lie in their intersection. We develop an asynchronous distributed accelerated primal-dual algorithm to solve the considered problem. To the best of our knowledge, the proposed scheme is the first asynchronous method with an optimal convergence guarantee for this class of problems. In particular, we provide an optimal convergence rate of $\mathcal{O}(1/K)$ for suboptimality, infeasibility, and consensus violation.
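To make the setup concrete, here is a minimal, synchronous sketch of a generic projected primal-dual consensus update; it is not the paper's accelerated asynchronous scheme, and the names grad_f, proj, W, alpha, and beta are illustrative assumptions.

```python
import numpy as np

def primal_dual_consensus_step(x, lam, grad_f, proj, W, alpha=0.1, beta=0.1):
    """One synchronous round over all agents; x and lam have shape (n_agents, dim).

    Illustrative sketch only: grad_f[i] returns agent i's local gradient,
    proj[i] is the Euclidean projection onto its private constraint set,
    and W is a doubly stochastic mixing matrix for the communication graph.
    """
    n = x.shape[0]
    x_new = np.empty_like(x)
    for i in range(n):
        # disagreement with neighbors, weighted by the mixing matrix W
        disagreement = sum(W[i, j] * (x[i] - x[j]) for j in range(n))
        # primal step: local gradient plus dual (consensus) pressure, then
        # project onto the agent's private constraint set
        x_new[i] = proj[i](x[i] - alpha * (grad_f[i](x[i]) + lam[i] + disagreement))
    # dual step pushes agents toward agreement (I - W acts like a graph Laplacian)
    lam_new = lam + beta * (x_new - W @ x_new)
    return x_new, lam_new
```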
Award ID(s):
2127696
NSF-PAR ID:
10443651
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
2023 57th Annual Conference on Information Sciences and Systems (CISS)
Page Range / eLocation ID:
1 to 6
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We propose and analyze a new stochastic gradient method, which we call Stochastic Unbiased Curvature-aided Gradient (SUCAG), for finite-sum optimization problems. SUCAG constitutes an unbiased total gradient tracking technique that uses Hessian information to accelerate convergence. We analyze our method under the general asynchronous model of computation, in which each function is selected infinitely often with possibly unbounded (but sublinear) delay. For strongly convex problems, we establish linear convergence for the SUCAG method. When the initialization point is sufficiently close to the optimal solution, the established convergence rate depends only on the condition number of the problem, making it strictly faster than the known rate for the SAGA method. Furthermore, we describe a Markov-driven approach to implementing the SUCAG method in a distributed asynchronous multi-agent setting, via gossiping along a random walk on an undirected communication graph. We show that our analysis applies as long as the graph is connected and, notably, establishes an asymptotic linear convergence rate that is robust to the graph topology. Numerical results demonstrate the merits of our algorithm over existing methods.
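As a rough illustration of the curvature-aided idea sketched above (not the authors' exact SUCAG update), the snippet below keeps a memory point per component and corrects each stored gradient with its stored Hessian; the names mem, grads, hesss, and step are assumptions made here.

```python
import numpy as np

def curvature_aided_step(x, mem, grads, hesss, step=0.1, rng=np.random.default_rng()):
    """mem: per-component memory points y_i; grads/hesss: callables returning
    the gradient and Hessian of each component f_i (illustrative sketch only)."""
    n = len(mem)
    i = int(rng.integers(n))       # refresh one randomly selected component
    mem[i] = x.copy()
    # curvature-corrected (first-order Taylor) estimate of the average gradient
    g = np.zeros_like(x)
    for j in range(n):
        g += grads[j](mem[j]) + hesss[j](mem[j]) @ (x - mem[j])
    g /= n
    return x - step * g, mem
```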
  2. We consider a class of convex decentralized consensus optimization problems over connected multi-agent networks. Each agent in the network holds its local objective function privately, and can only communicate with its directly connected agents during the computation to find the minimizer of the sum of all objective functions. We propose a randomized incremental primal-dual method to solve this problem, where the dual variable over the network in each iteration is only updated at a randomly selected node, whereas the dual variables elsewhere remain the same as in the previous iteration. Thus, the communication only occurs in the neighborhood of the selected node in each iteration and hence greatly reduces the communication delays and failures that arise in standard fully synchronized consensus algorithms. We provide a comprehensive convergence analysis, including convergence rates for the primal residual and consensus error of the proposed algorithm, and conduct numerical experiments to show its performance using both uniform sampling and importance sampling as the node selection strategy.
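A hedged sketch of the randomized-update pattern described above, not the authors' exact iteration: every agent takes a primal step, but only one randomly sampled node refreshes its dual variable in each round. The names grad_f, A, alpha, beta, and probs are illustrative assumptions.

```python
import numpy as np

def randomized_incremental_round(x, lam, grad_f, A, alpha=0.05, beta=0.05,
                                 probs=None, rng=np.random.default_rng()):
    """x, lam: arrays of shape (n_agents, dim); A: consensus coupling matrix (n, n)."""
    n = x.shape[0]
    k = rng.choice(n, p=probs)                     # uniform or importance sampling
    # primal update at every agent: local gradient plus dual coupling term
    local_grads = np.stack([grad_f[i](x[i]) for i in range(n)])
    x = x - alpha * (local_grads + A.T @ lam)
    # dual update only at the sampled node k; all other duals are carried over
    lam = lam.copy()
    lam[k] = lam[k] + beta * (A @ x)[k]
    return x, lam
```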
  3. With the emergence of the Internet of Things that allows communications and local computations, and with the vision of Industry 4.0, a foreseeable transition is from centralized system planning and operation toward decentralization with interacting components and subsystems, e.g., self-optimizing factories. In this article, a new "price-based" decomposition and coordination methodology is developed to efficiently coordinate, in an asynchronous way, a system consisting of distributed subsystems such as machines and parts, which are described by mixed-integer linear programming (MILP) formulations. The novel method is a dual approach, whereby the coordination is performed by updating Lagrangian multipliers based on economic principles of "supply and demand." To ensure low communication requirements within the method, exchanges between the "coordinator" and subsystems are limited to "prices" (Lagrangian multipliers) broadcast by the coordinator and to subsystem solutions sent to the coordinator. Asynchronous coordination, however, may lead to convergence difficulties since the order in which subsystem solutions arrive at the coordinator is not predefined as a result of uncertainties in communication and solving times. Under realistic assumptions of finite communication and solving times, the convergence of our method is proven by innovatively extending Lyapunov stability theory. Numerical testing of generalized assignment problems through simulation demonstrates that the method converges fast and provides near-optimal results, paving the way for self-optimizing factories in the future. Accompanying CPLEX codes and data are included.
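The following is a minimal sketch of the kind of "supply and demand" price update described above, not the paper's method; step-size rules, the MILP subproblem solves, and asynchrony safeguards are omitted, and the names prices, coupling_residual, and step are assumptions.

```python
def update_prices(prices, coupling_residual, step=0.5):
    """prices and coupling_residual are dicts keyed by shared resource;
    the residual is (demand implied by received subsystem solutions) - (supply)."""
    new_prices = {}
    for resource, price in prices.items():
        # subgradient-style step: raise the price where demand exceeds supply,
        # lower it where capacity is slack; keep multipliers nonnegative
        new_prices[resource] = max(0.0, price + step * coupling_residual.get(resource, 0.0))
    return new_prices
```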
  4. In many scenarios, information must be disseminated over intermittently-connected environments when the network infrastructure becomes unavailable, e.g., during disasters where first responders need to send updates about critical tasks. If such updates pertain to a shared data set, dissemination consistency is important. This can be achieved through causal ordering and consensus. Popular consensus algorithms, e.g., Paxos, are best suited for connected environments. While some work has been done on designing consensus algorithms for intermittently-connected environments, such as the One-Third Rule (OTR) algorithm, there is still a need to improve their efficiency and timely completion. We propose CoNICE, a framework to ensure consistent dissemination of updates among users in intermittently-connected, infrastructure-less environments. It achieves efficiency by exploiting hierarchical namespaces for faster convergence and lower communication overhead. CoNICE provides three levels of consistency to users, namely replication, causality, and agreement. It uses epidemic propagation to provide adequate replication ratios, and optimizes and extends vector clocks to provide causality. To ensure agreement, CoNICE extends OTR to also support long-term network fragmentation and decision invalidation scenarios; we define local and global consensus pertaining to within and across fragments, respectively. We integrate CoNICE's consistency preservation with a naming schema that follows a topic hierarchy-based dissemination framework, to improve functionality and performance. Using the Heard-Of model formalism, we prove CoNICE's consensus to be correct. Our technique extends previously established proof methods for consensus in asynchronous environments. Performing city-scale simulation, we demonstrate CoNICE's scalability in achieving consistency, in terms of convergence time, network resource utilization, and energy consumption.
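As a small illustration of the causality layer mentioned above, here is a textbook vector-clock implementation; CoNICE's optimized and extended clocks differ, so this is only a baseline sketch.

```python
def vc_increment(clock, node):
    """Tick this node's entry before sending or recording a local event."""
    clock = dict(clock)
    clock[node] = clock.get(node, 0) + 1
    return clock

def vc_merge(local, received, node):
    """On receive: take the elementwise max of the two clocks, then tick locally."""
    keys = set(local) | set(received)
    merged = {k: max(local.get(k, 0), received.get(k, 0)) for k in keys}
    return vc_increment(merged, node)

def vc_happened_before(a, b):
    """a causally precedes b iff a <= b componentwise and a < b in some entry."""
    keys = set(a) | set(b)
    return (all(a.get(k, 0) <= b.get(k, 0) for k in keys)
            and any(a.get(k, 0) < b.get(k, 0) for k in keys))
```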
  5. Ruiz, Francisco ; Dy, Jennifer ; van de Meent, Jan-Willem (Ed.)
    In this paper, we study a class of bilevel optimization problems, also known as simple bilevel optimization, where we minimize a smooth objective function over the optimal solution set of another convex constrained optimization problem. Several iterative methods have been developed for tackling this class of problems. Alas, their convergence guarantees are either asymptotic for the upper-level objective, or the convergence rates are slow and sub-optimal. To address this issue, in this paper, we introduce a novel bilevel optimization method that locally approximates the solution set of the lower-level problem via a cutting plane and then runs a conditional gradient update to decrease the upper-level objective. When the upper-level objective is convex, we show that our method requires ${O}(\max\{1/\epsilon_f,1/\epsilon_g\})$ iterations to find a solution that is $\epsilon_f$-optimal for the upper-level objective and $\epsilon_g$-optimal for the lower-level objective. Moreover, when the upper-level objective is non-convex, our method requires ${O}(\max\{1/\epsilon_f^2,1/(\epsilon_f\epsilon_g)\})$ iterations to find an $(\epsilon_f,\epsilon_g)$-optimal solution. We also prove stronger convergence guarantees under the Hölderian error bound assumption on the lower-level problem. To the best of our knowledge, our method achieves the best-known iteration complexity for the considered class of bilevel problems.
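To illustrate the flavor of the cutting-plane-plus-conditional-gradient idea described above (not the authors' exact method), the sketch below takes one Frank-Wolfe step on the upper-level objective over an assumed linear minimization oracle lmo for the cutting-plane set; grad_f, lmo, and k are names introduced here.

```python
def bilevel_cg_step(x, grad_f, lmo, k):
    """One conditional-gradient step on the upper-level objective f, restricted to
    a cutting-plane outer approximation of the lower-level solution set.
    lmo(c) is assumed to return argmin over that set of the inner product <c, s>."""
    s = lmo(grad_f(x))             # linear minimization over the cutting-plane set
    gamma = 2.0 / (k + 2)          # standard Frank-Wolfe step-size schedule
    return x + gamma * (s - x)
```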