Search results: all records where Award ID contains 2449927


  1. In networks consisting of agents communicating with a central coordinator and working together to solve a global optimization problem in a distributed manner, the agents are often required to solve private proximal minimization subproblems. Such a setting often requires a decomposition method to solve the global distributed problem, resulting in extensive communication overhead. In networks where communication is expensive, it is crucial to reduce the communication overhead of the distributed optimization scheme. Gaussian processes (GPs) are effective at learning the agents' local proximal operators, thereby reducing the communication between the agents and the coordinator. We propose combining this learning method with adaptive uniform quantization for a hybrid approach that can achieve further communication reduction. In our approach, the GP algorithm is modified to account for the statistics of the noise introduced by data quantization. We further improve our approach by applying an orthogonalization step to the quantizer's input to address the inherent correlation of the input components. We also use dithering to ensure that the quantization noise is uncorrelated with the quantizer's input. We propose multiple measures to quantify the trade-off between communication cost reduction and the accuracy/optimality of the optimization solution. Under such metrics, our proposed algorithms can achieve significant communication reduction for distributed optimization with acceptable accuracy, even at low quantization resolutions. This result is demonstrated by simulations of a distributed sharing problem with quadratic cost functions for the agents.
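The key ingredients of the first abstract are an adaptive uniform quantizer and subtractive dithering that decorrelates the quantization noise from the quantizer's input. The snippet below is a minimal Python sketch of that quantizer alone, under illustrative assumptions: the window is centered on a GP predictive mean with a 3-sigma half-width, and the dither sequence is shared between agent and coordinator (e.g. via a common PRNG seed). The function name, window rule, and parameter values are hypothetical, and the orthogonalization step and noise-aware GP update from the abstract are omitted.

```python
import numpy as np

def dithered_uniform_quantize(x, mid, window, bits, rng):
    """Adaptive uniform quantizer with subtractive dither (illustrative sketch).

    The quantization window is centered at `mid` (e.g. a GP predictive mean)
    with half-width `window` (e.g. scaled from the GP predictive standard
    deviation); `bits` sets the resolution.
    """
    levels = 2 ** bits
    step = 2.0 * window / levels                       # quantization step size
    # Subtractive dither: the same dither is assumed known at the receiver.
    dither = rng.uniform(-step / 2, step / 2, size=np.shape(x))
    # Quantize the dithered, centered input to the nearest level.
    q = np.round((x - mid + dither) / step) * step
    # Saturate to the quantizer window, then undo centering and dither.
    q = np.clip(q, -window, window)
    return q + mid - dither

# Hypothetical usage: quantize an agent's proximal-step output before sending it.
rng = np.random.default_rng(0)
gp_mean, gp_std = 1.3, 0.4                             # hypothetical GP predictive moments
x = np.array([1.1, 1.5, 0.9])
xq = dithered_uniform_quantize(x, mid=gp_mean, window=3.0 * gp_std, bits=4, rng=rng)
```

With subtractive dither the reconstruction error is uniformly distributed and uncorrelated with the input, which is the property the abstract's dithering step relies on.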
  2. In networks consisting of agents communicating with a central coordinator and working together to solve a global optimization problem in a distributed manner, the agents are often required to solve private proximal minimization subproblems. Such a setting often requires a further decomposition method to solve the global distributed problem, resulting in extensive communication overhead. In networks where communication is expensive, it is crucial to reduce the communication overhead of the distributed optimization scheme. Integrating Gaussian processes (GPs) as a learning component into the Alternating Direction Method of Multipliers (ADMM) has proven effective in learning each agent's local proximal operator, reducing the required communication exchange. In this work, we propose to combine this learning method with adaptive uniform quantization in a hybrid approach that can achieve further communication reduction when solving a distributed optimization problem with ADMM. This adaptive quantization first sets the quantizer's mid-value and window length according to the mean and covariance given by the GP. In a later stage of our study, this adaptation is extended to also vary the quantization bit resolution. In addition, a convergence analysis of this setting is derived, leading to convergence conditions and error bounds in the cases where convergence cannot be formally proven. Furthermore, we study the impact of the coordinator's communication decision-making, leading us to propose several query strategies based on the agents' uncertainty measures given by the regression process. Extensive numerical experiments on a distributed sharing problem with quadratic agent cost functions have been conducted throughout this study. The results demonstrate that the proposed algorithms achieve their primary goal of minimizing the overall communication overhead while maintaining satisfactory accuracy of the global solutions. The favorable accuracy observed in the numerical experiments is consistent with the findings of the derived convergence analysis. In instances where a convergence proof is lacking, we have shown that the overall ADMM residual remains bounded by a diminishing threshold, implying that our algorithmic solutions closely approximate the true solution and validating the reliability of our approaches.
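The second abstract couples the GP surrogate with ADMM for the sharing problem and lets the coordinator decide when to query an agent based on the GP's predictive uncertainty. Below is a minimal Python sketch of that idea, not the paper's implementation: it assumes quadratic agent costs with a closed-form proximal step, a quadratic coupling cost, scikit-learn GPs, and a fixed uncertainty threshold for the query rule; quantization is omitted. All names (e.g. `true_prox`) and parameter values are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical quadratic agent costs f_i(x) = 0.5 * a_i * (x - c_i)**2 and
# coupling cost g(s) = 0.5 * lam * s**2 on the average/sum of the agents' variables.
rng = np.random.default_rng(1)
N, rho, lam, tol = 5, 1.0, 0.5, 0.05
a, c = rng.uniform(0.5, 2.0, N), rng.uniform(-2.0, 2.0, N)

def true_prox(i, v):
    """Agent i's exact proximal step (costs one round of communication)."""
    return (rho * v + a[i] * c[i]) / (rho + a[i])

gps = [GaussianProcessRegressor(kernel=RBF() + WhiteKernel(1e-4)) for _ in range(N)]
data = [([], []) for _ in range(N)]          # per-agent training pairs (v, prox(v))
x, u, zbar, queries = np.zeros(N), 0.0, 0.0, 0

for _ in range(50):                          # ADMM iterations (sharing problem, scaled form)
    xbar = x.mean()
    for i in range(N):
        v = x[i] - xbar + zbar - u           # argument of agent i's proximal step
        if len(data[i][0]) < 2:
            std = np.inf                     # too little data: always query the agent
        else:
            mean, std = gps[i].predict(np.array([[v]]), return_std=True)
            mean, std = float(mean[0]), float(std[0])
        if std > tol:                        # uncertainty-based query strategy
            x[i] = true_prox(i, v)           # communicate with agent i
            data[i][0].append([v]); data[i][1].append(x[i])
            gps[i].fit(np.array(data[i][0]), np.array(data[i][1]))
            queries += 1
        else:
            x[i] = mean                      # use the GP surrogate, no communication
    xbar = x.mean()
    zbar = rho * (u + xbar) / (lam * N + rho)   # z-update for the quadratic coupling cost
    u = u + xbar - zbar                      # scaled dual update

print("agent queries used:", queries, "out of", 50 * N)
```

The printed query count is one simple way to measure the communication saved relative to standard ADMM, which would query every agent at every iteration.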