In networks consisting of agents communicating with a central coordinator and working together to solve a global optimization problem in a distributed manner, the agents are often required to solve private proximal minimization subproblems. Such a setting often requires a decomposition method to solve the global distributed problem, resulting in extensive communication overhead. In networks where communication is expensive, it is crucial to reduce the communication overhead of the distributed optimization scheme. Integrating Gaussian processes (GPs) as a learning component into the Alternating Direction Method of Multipliers (ADMM) has proven effective in learning each agent's local proximal operator and thereby reducing the required communication exchange. In this work, we propose to combine this learning method with adaptive uniform quantization in a hybrid approach that can achieve further communication reduction when solving a distributed optimization problem with ADMM. This adaptive quantization first sets the quantizer's mid-value and window length according to the mean and covariance of the GP posterior. In a later stage of our study, this adaptation is extended to also vary the quantization bit resolution. In addition, a convergence analysis of this setting is derived, yielding convergence conditions and error bounds for the cases where convergence cannot be formally proven. Furthermore, we study the impact of the coordinator's communication decision-making, leading to several proposed query strategies based on the agents' uncertainty measures from the regression process. Extensive numerical experiments on a distributed sharing problem with quadratic cost functions for the agents have been conducted throughout this study.
The results demonstrate that the proposed algorithms achieve their primary goal of minimizing the overall communication overhead while ensuring that the global solutions maintain satisfactory levels of accuracy. The favorable accuracy observed in the numerical experiments is consistent with the findings of the derived convergence analysis. In instances where a convergence proof is lacking, we show that the overall ADMM residual remains bounded by a diminishing threshold. This implies that our algorithmic solutions can be expected to closely approximate the true solution, validating the reliability of our approaches.
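The distributed sharing problem with quadratic agent costs used in the experiments above can be sketched with the standard scaled-form ADMM updates for the sharing formulation. This is a minimal illustrative sketch, not the paper's implementation; the cost coefficients `a`, targets `c`, coupling weight `lam`, and penalty `rho` are assumed values, and the coupling term `(lam/2)(sum_i x_i)^2` is one simple choice of sharing objective:

```python
import numpy as np

def admm_sharing(a, c, lam=1.0, rho=1.0, iters=500):
    """Scaled ADMM for a sharing problem with quadratic agent costs:
        minimize  sum_i a_i (x_i - c_i)^2  +  (lam/2) (sum_i x_i)^2
    Each agent's x-update is a private proximal step (closed form here
    because the costs are quadratic); only averages are exchanged with
    the coordinator.
    """
    n = len(a)
    x = np.zeros(n)
    zbar = 0.0  # coordinator's average variable
    u = 0.0     # scaled dual variable
    for _ in range(iters):
        xbar = x.mean()
        v = x - xbar + zbar - u                    # per-agent prox targets
        x = (2 * a * c + rho * v) / (2 * a + rho)  # local proximal updates
        xbar = x.mean()
        zbar = rho * (xbar + u) / (lam * n + rho)  # coordinator z-update
        u = u + xbar - zbar                        # dual ascent step
    return x
```

In the GP-assisted variant, the coordinator would replace some of the exchanged `x` values with GP predictions, querying an agent for its true proximal update only when needed.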
Communication-efficient ADMM using quantization-aware Gaussian process regression
In networks consisting of agents communicating with a central coordinator and working together to solve a global optimization problem in a distributed manner, the agents are often required to solve private proximal minimization subproblems. Such a setting often requires a decomposition method to solve the global distributed problem, resulting in extensive communication overhead. In networks where communication is expensive, it is crucial to reduce the communication overhead of the distributed optimization scheme. Gaussian processes (GPs) are effective at learning the agents' local proximal operators, thereby reducing the communication between the agents and the coordinator. We propose combining this learning method with adaptive uniform quantization for a hybrid approach that can achieve further communication reduction. In our approach, due to data quantization, the GP algorithm is modified to account for the statistics of the introduced quantization noise. We further improve our approach by introducing an orthogonalization process at the quantizer's input to address the inherent correlation of the input components. We also use dithering to ensure that the quantization noise is uncorrelated with the quantizer's input. We propose multiple measures to quantify the trade-off between the communication cost reduction and the optimization solution's accuracy/optimality. Under such metrics, our proposed algorithms can achieve significant communication reduction for distributed optimization with acceptable accuracy, even at low quantization resolutions. This result is demonstrated by simulations of a distributed sharing problem with quadratic cost functions for the agents.
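The adaptive uniform quantizer described above can be sketched as follows: the window is centered on the GP predictive mean, scaled by the predictive standard deviation, and a subtractive dither decorrelates the quantization noise from the input. The window factor `k` and the function name are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def adaptive_uniform_quantize(x, gp_mean, gp_std, bits=4, k=3.0, rng=None):
    """Uniform quantizer whose mid-value is the GP predictive mean and whose
    window spans k standard deviations on each side, so the quantizer adapts
    as the GP posterior tightens. Subtractive dithering makes the quantization
    error approximately uniform and uncorrelated with the input."""
    if rng is None:
        rng = np.random.default_rng()
    levels = 2 ** bits
    half_window = k * gp_std
    step = 2.0 * half_window / levels          # quantization step size
    dither = rng.uniform(-step / 2, step / 2)  # subtractive dither sample
    # Shift by the mid-value, clip to the window, quantize, then undo both.
    centered = np.clip(x - gp_mean, -half_window, half_window)
    q_index = np.round((centered + dither) / step)
    q_index = np.clip(q_index, -levels // 2, levels // 2 - 1)
    return gp_mean + q_index * step - dither
```

Only the integer `q_index` (a `bits`-bit codeword) needs to be transmitted; the receiver reconstructs the value from the shared GP statistics and a synchronized dither sequence. For inputs well inside the window, the reconstruction error is at most half a step.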
- PAR ID: 10578637
- Publisher / Repository: Elsevier
- Date Published:
- Journal Name: EURO Journal on Computational Optimization
- Volume: 12
- Issue: C
- ISSN: 2192-4406
- Page Range / eLocation ID: 100098
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
In distributed optimization schemes consisting of a group of agents connected to a central coordinator, the optimization algorithm often involves the agents solving private local sub-problems and exchanging data frequently with the coordinator to solve the global distributed problem. In those cases, the query-response mechanism usually causes excessive communication costs to the system, necessitating communication reduction in scenarios where communication is costly. Integrating Gaussian processes (GP) as a learning component to the Alternating Direction Method of Multipliers (ADMM) has proven effective in learning each agent’s local proximal operator to reduce the required communication exchange. A key element for integrating GP into the ADMM algorithm is the querying mechanism upon which the coordinator decides when communication with an agent is required. In this paper, we formulate a general querying decision framework as an optimization problem that balances reducing the communication cost and decreasing the prediction error. Under this framework, we propose a joint query strategy that takes into account the joint statistics of the query and ADMM variables and the total communication cost of all agents in the presence of uncertainty caused by the GP regression. In addition, we derive three different decision mechanisms that simplify the general framework by making the communication decision for each agent individually. We integrate multiple measures to quantify the trade-off between the communication cost reduction and the optimization solution’s accuracy/optimality. The proposed methods can achieve significant communication reduction and good optimization solution accuracy for distributed optimization, as demonstrated by extensive simulations of a distributed sharing problem.
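In their simplest form, the per-agent decision mechanisms described above reduce to comparing the expected prediction error against the communication cost. The following is a hypothetical simplification for illustration, not the paper's exact criterion; `error_weight` is an assumed tuning parameter:

```python
import numpy as np

def query_decision(gp_std, comm_cost, error_weight=1.0):
    """Per-agent query rule: query agent i when the expected squared
    prediction error (the GP predictive variance, weighted) outweighs
    the cost of communicating with that agent.
    Returns a boolean mask of agents to query this round."""
    expected_error = error_weight * gp_std ** 2
    return expected_error > comm_cost
```

Agents with a confident GP prediction (small predictive variance) are skipped and their proximal updates are predicted instead, while uncertain agents are queried for their true updates.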
-
A critical factor for expanding the adoption of networked solutions is ensuring local data privacy of in-network agents implementing a distributed algorithm. In this paper, we consider privacy preservation in the distributed optimization problem in the sense that local cost parameters should not be revealed. Current approaches to privacy preservation normally propose methods that sacrifice exact convergence or increase communication overhead. We propose PrivOpt, an intrinsically private distributed optimization algorithm that converges exponentially fast without any convergence error or using extra communication channels. We show that when the number of the parameters of the local cost is greater than the dimension of the decision variable of the problem, no malicious agent, even if it has access to all transmitted-in and -out messages in the network, can obtain local cost parameters of other agents. As an application study, we show how our proposed PrivOpt algorithm can be used to solve an optimal resource allocation problem with the guarantees that the local cost parameters of all the agents stay private.
-
In multi-agent Bayesian optimization for Design Space Exploration (DSE), identifying a communication network among agents to share useful design information for enhanced cooperation and performance, considering the trade-off between connectivity and cost, poses significant challenges. To address this challenge, we develop a distributed multi-agent Bayesian optimization (DMABO) framework and study how communication network structures/connectivity and the resulting cost would impact the performance of a team of agents when finding the global optimum. Specifically, we utilize Lloyd’s algorithm to partition the design space to assign distinct regions to individual agents for exploration in the distributed multi-agent system (MAS). Based on this partitioning, we generate communication networks among agents using two models: 1) a range-limited model of communication constrained by neighborhood information; and 2) a range-free model without neighborhood constraints. We introduce network density as a metric to quantify communication costs. Then, we generate communication networks by gradually increasing the network density to assess the impact of communication costs on the performance of MAS in DSE. The experimental results show that the communication network based on the range-limited model can significantly improve performance without incurring high communication costs. This indicates that increasing the density of a communication network does not necessarily improve MAS performance in DSE. Furthermore, the results indicate that communication is only beneficial for team performance if it occurs between specific agents whose search regions are critically relevant to the location of the global optimum. The proposed DMABO framework and the insights obtained can help identify the best trade-off between communication structure and cost for MAS in unknown design space exploration.
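The Lloyd's-algorithm partitioning step mentioned above is plain k-means clustering over sampled design points, with each agent assigned the region nearest its centroid. A minimal sketch under that assumption; the function name and parameters are illustrative, not the paper's code:

```python
import numpy as np

def lloyd_partition(points, n_agents, iters=50, seed=0):
    """Partition a sampled design space among agents with Lloyd's algorithm:
    alternate between assigning each point to its nearest centroid and
    moving each centroid to the mean of its assigned region."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), n_agents, replace=False)]
    for _ in range(iters):
        # Assign each design point to the nearest agent centroid.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned region.
        for j in range(n_agents):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids
```

Each agent then runs its Bayesian-optimization search within its assigned region, sharing information only over the generated communication network.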
-
This paper studies distributed Q-learning for Linear Quadratic Regulator (LQR) in a multi-agent network. The existing results often assume that agents can observe the global system state, which may be infeasible in large-scale systems due to privacy concerns or communication constraints. In this work, we consider a setting with unknown system models and no centralized coordinator. We devise a state tracking (ST) based Q-learning algorithm to design optimal controllers for agents. Specifically, we assume that agents maintain local estimates of the global state based on their local information and communications with neighbors. At each step, every agent updates its local global state estimation, based on which it solves an approximate Q-factor locally through policy iteration. Assuming a decaying injected excitation noise during the policy evaluation, we prove that the local estimation converges to the true global state, and establish the convergence of the proposed distributed ST-based Q-learning algorithm. The experimental studies corroborate our theoretical results by showing that our proposed method achieves comparable performance with the centralized case.

