A critical factor in expanding the adoption of networked solutions is ensuring the local data privacy of in-network agents implementing a distributed algorithm. In this paper, we consider privacy preservation in distributed optimization in the sense that local cost parameters should not be revealed. Existing approaches to privacy preservation typically sacrifice exact convergence or incur extra communication overhead. We propose PrivOpt, an intrinsically private distributed optimization algorithm that converges exponentially fast with no convergence error and no extra communication channels. We show that when the number of parameters of the local cost exceeds the dimension of the problem's decision variable, no malicious agent, even one with access to all incoming and outgoing messages in the network, can obtain the local cost parameters of other agents. As an application study, we show how PrivOpt can be used to solve an optimal resource allocation problem with the guarantee that the local cost parameters of all agents stay private.
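The abstract describes the privacy model rather than the update law. As a point of reference, the sketch below shows the structure such intrinsically private schemes share: a plain consensus-plus-gradient iteration (not PrivOpt itself) in which only state estimates traverse the network, so the local cost parameters, here hypothetical quadratic parameters Q_i and b_i, never leave their owners.

```python
import numpy as np

# Minimal sketch of consensus + gradient descent over a ring of n agents.
# Hypothetical quadratic local costs f_i(x) = 0.5 x'Q_i x + b_i'x; the
# parameters (Q_i, b_i) never leave agent i -- only state estimates x_i
# are transmitted. This illustrates the "transmit only states" structure,
# not the PrivOpt update law.

rng = np.random.default_rng(0)
n, d = 5, 2                                 # agents, decision dimension
Q = [np.diag(rng.uniform(1, 3, d)) for _ in range(n)]   # private to agent i
b = [rng.normal(size=d) for _ in range(n)]              # private to agent i

# Doubly stochastic mixing weights for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = [np.zeros(d) for _ in range(n)]
alpha = 0.05
for k in range(500):
    # Each agent mixes its neighbors' states, then takes a local gradient step.
    mixed = [sum(W[i, j] * x[j] for j in range(n)) for i in range(n)]
    x = [mixed[i] - alpha * (Q[i] @ mixed[i] + b[i]) for i in range(n)]

# Optimizer of sum_i f_i: solves (sum_i Q_i) x* = -sum_i b_i.
x_star = np.linalg.solve(sum(Q), -sum(b))
print("agent 0 estimate:", x[0].round(3), " optimum:", x_star.round(3))
```

Note that plain decentralized gradient descent with a constant step converges only to a neighborhood of the optimizer; per the abstract, PrivOpt achieves exact exponential convergence while keeping this transmit-only-states structure.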
Distributed optimal resource allocation using transformed primal-dual method
We consider an in-network optimal resource allocation problem in which a group of agents interacting over a connected graph want to meet a demand while minimizing their collective cost. The contribution of this paper is a distributed continuous-time algorithm for this problem inspired by a recently developed first-order transformed primal-dual method. The solution applies to the cluster-based setting in which each agent may have a set of subagents, and its local cost is the sum of the costs of these subagents. The proposed algorithm guarantees exponential convergence for strongly convex local costs, even when the local gradients are only locally Lipschitz, and asymptotic convergence to a point in the minimizer set for convex local costs. Through numerical examples, we show that our proposed algorithm delivers faster convergence than existing distributed resource allocation algorithms.
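As a rough illustration of this problem class (not the paper's transformed primal-dual method), the following sketch runs the classical weighted-gradient iteration, sometimes called "center-free" resource allocation, for min Σ_i f_i(x_i) subject to Σ_i x_i = D; the ring topology and quadratic costs are assumptions for the example.

```python
import numpy as np

# Minimal "center-free" resource-allocation sketch (not the paper's
# transformed primal-dual method): minimize sum_i f_i(x_i) subject to
# sum_i x_i = D over a ring graph. Agents exchange only gradients with
# neighbors; x(k+1) = x(k) - alpha * L @ grad keeps the total allocation
# invariant because the symmetric Laplacian L satisfies 1'L = 0.

rng = np.random.default_rng(1)
n, D = 6, 10.0
a = rng.uniform(1, 4, n)            # f_i(x) = 0.5 * a_i * x^2, strongly convex
grad = lambda x: a * x

# Ring-graph Laplacian.
L = 2 * np.eye(n)
for i in range(n):
    L[i, (i - 1) % n] = L[i, (i + 1) % n] = -1.0

x = np.full(n, D / n)               # feasible start: sum x_i = D
alpha = 0.05
for k in range(400):
    x = x - alpha * L @ grad(x)     # feasibility preserved at every step

# Optimality: all marginal costs a_i * x_i are equal and sum x_i = D.
print("allocation:", x.round(3))
print("marginal costs:", (a * x).round(3), " total:", round(x.sum(), 6))
```

The design choice worth noting is that the constraint is enforced by the structure of the dynamics (the all-ones vector is in the Laplacian's null space), not by a projection step; the paper's continuous-time algorithm exploits a similar invariance.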
- PAR ID: 10507031
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: American Control Conference
- ISBN: 979-8-3503-2806-6
- Page Range / eLocation ID: 198 to 203
- Format(s): Medium: X
- Location: San Diego, CA, USA
- Sponsoring Org: National Science Foundation
More Like this
This paper proposes a distributed solution for an optimal resource allocation problem with a time-varying cost function and time-varying demand. The objective is to minimize a global cost, which is the summation of local quadratic time-varying cost functions, by allocating time-varying resources. A reformulation of the original problem is developed and solved in a distributed manner using only local interactions over an undirected connected graph. In the proposed algorithm, the local state trajectories converge to a bounded neighborhood of the optimal trajectory. This bound is characterized in terms of the cost parameters and the topology properties of the graph. We also show that despite the tracking error, the trajectories are feasible at all times, meaning that the resource allocation equality constraint is met at every execution time. Our algorithm also considers the possibility of some generators going out of production from time to time and adjusts the solution so that the remaining generators can meet the demand in an optimal manner. Numerical examples demonstrate our results.
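One hypothetical way to see the feasibility-at-all-times property: if each agent observes its own time-varying demand share and absorbs the local change before every descent step, the allocation constraint holds exactly at each execution time while the Laplacian flow tracks the drifting optimum. The sketch below illustrates this idea only; it is not the paper's algorithm, and the sinusoidal costs and demands are made up for the example.

```python
import numpy as np

# Hypothetical time-varying resource allocation sketch: agent i observes
# its own demand share d_i(t); injecting the local change keeps
# sum_i x_i = sum_i d_i(t) exactly at every step, while the Laplacian
# gradient flow chases the moving optimum with bounded tracking error.

rng = np.random.default_rng(2)
n = 6
a = rng.uniform(1, 4, n)                    # f_i(x, t) = 0.5*a_i*(x - c_i(t))^2
c = lambda t: np.sin(t + np.arange(n))      # drifting cost minimizers
d = lambda t: 2.0 + 0.5 * np.sin(0.5 * t + np.arange(n))    # local demands

L = 2 * np.eye(n)                           # ring-graph Laplacian
for i in range(n):
    L[i, (i - 1) % n] = L[i, (i + 1) % n] = -1.0

dt = 0.01
x = d(0.0).copy()                           # feasible start
for k in range(2000):
    t = k * dt
    x += d(t + dt) - d(t)                   # absorb local demand change: exact feasibility
    g = a * (x - c(t + dt))                 # local gradients
    x -= dt * L @ g                         # sum-preserving descent

print("total allocation:", round(x.sum(), 6),
      " total demand:", round(d(2000 * dt).sum(), 6))
```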
Stochastic second-order methods accelerate local convergence in strongly convex optimization by using noisy Hessian estimates to precondition gradients. However, they typically achieve superlinear convergence only when the Hessian noise diminishes, which increases per-iteration costs. Prior work [arXiv:2204.09266] introduced a Hessian averaging scheme that maintains low per-iteration cost while achieving superlinear convergence, but with slow global convergence, requiring Õ(κ²) iterations to reach the superlinear rate of Õ((1/t)^(t/2)), where κ is the condition number. This paper proposes a stochastic Newton proximal extragradient method that improves these bounds, delivering faster global linear convergence and achieving the same fast superlinear rate in only Õ(κ) iterations. The method extends the Hybrid Proximal Extragradient (HPE) framework, yielding improved global and local convergence guarantees for strongly convex functions with access to a noisy Hessian oracle.
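For intuition, here is a minimal sketch of the Hessian-averaging idea this abstract builds on (the scheme of [arXiv:2204.09266], not the proposed Newton proximal extragradient method): averaging i.i.d. noisy Hessian samples makes the preconditioner error shrink like O(1/√t) at no extra per-iteration sampling cost. The quadratic objective and the noise model are assumptions for illustration.

```python
import numpy as np

# Hessian-averaging sketch on a strongly convex quadratic with an exact
# gradient oracle and a noisy Hessian oracle. The running average H_bar
# concentrates around the true Hessian, so the preconditioned steps
# approach exact Newton steps. Real schemes add damping/regularization
# to keep the averaged Hessian positive definite in the small-t regime.

rng = np.random.default_rng(3)
d = 5
A = np.diag(np.linspace(1.0, 10.0, d))      # true Hessian, condition number 10
b = rng.normal(size=d)
grad = lambda x: A @ x + b                  # exact gradient oracle

def noisy_hessian():
    E = 0.1 * rng.normal(size=(d, d))
    return A + 0.5 * (E + E.T)              # symmetric zero-mean noise

x = np.zeros(d)
H_bar = noisy_hessian()                     # first sample
for t in range(2, 200):
    H_bar += (noisy_hessian() - H_bar) / t  # running average of t samples
    x = x - np.linalg.solve(H_bar, grad(x)) # preconditioned Newton step

x_star = np.linalg.solve(A, -b)
print("distance to optimum:", np.linalg.norm(x - x_star))
```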
We propose a new primal-dual homotopy smoothing algorithm for a linearly constrained convex program, where neither the primal nor the dual function has to be smooth or strongly convex. The best known iteration complexity for solving such a non-smooth problem is O(ε^(−1)). In this paper, we show that by leveraging a local error bound condition on the dual function, the proposed algorithm can achieve a better primal convergence time of O(ε^(−2/(2+β)) log²(ε^(−1))), where β ∈ (0, 1] is a local error bound parameter. As an example application of the general algorithm, we show that the distributed geometric median problem, which can be formulated as a constrained convex program, has a non-smooth dual function that nevertheless satisfies the aforementioned local error bound condition with β = 1/2, and therefore enjoys a convergence time of O(ε^(−4/5) log²(ε^(−1))). This result improves upon the O(ε^(−1)) convergence time bound achieved by existing distributed optimization algorithms. Simulation experiments also demonstrate the performance of our proposed algorithm.
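To make the homotopy-smoothing ingredient concrete, the sketch below solves the geometric median problem in a simple centralized form (the paper's algorithm is distributed and primal-dual; this shows only the smooth-then-shrink loop): each norm is replaced by a pseudo-Huber surrogate with parameter μ, minimized by gradient descent, and μ is halved between stages.

```python
import numpy as np

# Centralized homotopy-smoothing sketch for the geometric median
# min_x sum_j ||x - p_j||: replace each norm with the smooth surrogate
# sqrt(||r||^2 + mu^2) - mu (gradient is 1/mu-Lipschitz per term), run
# gradient descent on the smoothed sum, then shrink mu (the homotopy).

rng = np.random.default_rng(4)
m, dim = 50, 2
P = rng.normal(size=(m, dim))               # data points p_j

x = P.mean(axis=0)                          # warm start at the centroid
mu = 1.0
for stage in range(8):                      # homotopy: mu -> mu / 2
    step = mu / m                           # 1/L for the mu-smoothed sum
    for _ in range(200):
        R = x - P                           # residuals x - p_j
        g = (R / np.sqrt((R**2).sum(axis=1, keepdims=True) + mu**2)).sum(axis=0)
        x = x - step * g
    mu *= 0.5

print("geometric median estimate:", x.round(4))
```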
This work concerns the analysis and design of distributed first-order optimization algorithms over time-varying graphs. The goal of such algorithms is to optimize a global function that is the average of local functions using only local computations and communications. Several different algorithms have been proposed that achieve linear convergence to the global optimum when the local functions are strongly convex. We provide a unified analysis that yields the worst-case linear convergence rate as a function of the condition number of the local functions, the spectral gap of the graph, and the parameters of the algorithm. The framework requires solving a small semidefinite program whose size is fixed: it does not depend on the number of local functions or the dimension of their domain. The result is a computationally efficient method for distributed algorithm analysis that enables the rapid comparison, selection, and tuning of algorithms. Finally, we propose a new algorithm, which we call SVL, that is easily implementable and achieves a faster worst-case convergence rate than all other known algorithms.
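The "small fixed-size semidefinite program" can be illustrated on the simplest case. The sketch below certifies the worst-case contraction rate of centralized gradient descent over all L-smooth, m-strongly convex functions, a stand-in for the paper's distributed setting: the S-procedure with the sector inequality (g − m·e)ᵀ(g − L·e) ≤ 0, where e = x − x* and g = ∇f(x), yields a 2×2 LMI in the squared rate ρ² and a multiplier λ. To keep the sketch dependency-free, the LMI feasibility is checked by bisection over ρ² and a grid over λ rather than by calling an SDP solver.

```python
import numpy as np

# Certify |x_+ - x*| <= rho |x - x*| for gradient descent x_+ = x - alpha*g
# over all L-smooth, m-strongly convex f, via the S-procedure: find lam >= 0
# such that rho^2|e|^2 - |e - alpha*g|^2 - lam*sector(e, g) >= 0 for all
# (e, g), i.e. the 2x2 matrix M(rho^2, lam) below is PSD. Illustration only;
# the paper's SDP also couples the graph's spectral gap and algorithm
# parameters.

m, L = 1.0, 10.0
alpha = 2.0 / (m + L)                     # known optimal step size
lam = np.linspace(0.0, 0.2, 2001)         # grid over the multiplier

def feasible(rho2):
    # M = [[rho2 - 1 + lam*m*L,      alpha - lam*(m+L)/2],
    #      [alpha - lam*(m+L)/2,     lam - alpha^2      ]]
    # 2x2 PSD test: both diagonal entries and the determinant nonnegative.
    a = rho2 - 1.0 + lam * m * L
    b = alpha - lam * (m + L) / 2
    c = lam - alpha**2
    return bool(np.any((a >= 0) & (c >= 0) & (a * c - b**2 >= 0)))

lo, hi = 0.0, 1.0                         # rho^2 = 1 is feasible, 0 is not
for _ in range(50):                       # bisection on the squared rate
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if feasible(mid) else (mid, hi)

print("certified rate:", np.sqrt(hi))     # ~ (L - m)/(L + m) = 9/11 = 0.818...
```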