Title: A Study of Privacy Preservation in Average Consensus Algorithm via Deterministic Obfuscation Signals
This article studies the use of additive obfuscation signals to keep the reference values of the agents in the continuous-time Laplacian average consensus algorithm private from eavesdroppers. Obfuscation signals are perturbations that agents add to their local dynamics and their transmitted-out messages to conceal their private reference values. An eavesdropper is an agent, inside or outside the network, that has access to some subset of the interagent communication messages and whose knowledge set also includes the network topology. Rather than restricting attention to zero-sum and vanishing additive signals, our work determines the necessary and sufficient conditions that define the set of admissible obfuscation signals, i.e., those that do not perturb the convergence point of the algorithm from the average of the agents' reference values. Of theoretical interest, our results show that this class includes nonvanishing signals as well. Given this broader class of admissible obfuscation signals, we define a deterministic notion of privacy preservation: privacy is preserved for an agent if an eavesdropper can obtain neither the agent's private reference value nor a finite set of values to which it belongs. Then, we evaluate the agents' privacy against eavesdroppers with different knowledge sets.
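Below is a minimal Python sketch of the setting the abstract describes: Euler-discretized Laplacian consensus in which each agent adds an obfuscation signal to its transmitted message and a perturbation to its local dynamics, yet the network still converges to the exact average. The cycle graph, the particular vanishing/zero-sum signals, and all constants are illustrative assumptions, not the paper's characterization of the admissible class.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, T = 5, 1e-3, 20.0
r = rng.uniform(0, 10, n)             # private reference values
x = r.copy()                          # agents initialize at their references

# Weight-balanced digraph; an undirected cycle keeps the sketch simple.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian

v = np.arange(n) - (n - 1) / 2        # zero-sum direction for the dynamics signal

for k in range(int(T / dt)):
    t = k * dt
    g = np.sin(3 * t + np.arange(n)) * np.exp(-0.5 * t)  # message obfuscation (vanishing)
    f = np.exp(-t) * np.sin(2 * t) * v                   # dynamics perturbation (zero-sum)
    y = x + g                          # obfuscated transmissions hide the true states
    x = x + dt * (-L @ y + f)          # perturbed Laplacian consensus update

print(np.mean(r))                      # exact average of the private values
print(x)                               # all states converge to that average
```

The sketch only verifies the invariance mechanism: because the column sums of L are zero and the dynamics signal is zero-sum, the perturbations never move the network-wide average. The paper's point is that the admissible class also contains nonvanishing signals.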
Award ID(s):
1653838
PAR ID:
10507030
Author(s) / Creator(s):
;
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE Transactions on Control of Network Systems
Volume:
11
Issue:
1
ISSN:
2372-2533
Page Range / eLocation ID:
534 to 546
Subject(s) / Keyword(s):
Consensus algorithm, network systems, privacy preservation.
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we study the problem of privacy preservation of the continuous-time Laplacian static average consensus algorithm using additive perturbation signals. We consider this problem over a strongly connected and weight-balanced digraph. Starting from a local reference value, in the static average consensus algorithm each agent continually communicates with its neighboring agents, updating its local state to compute the average of the reference values across the network. Since every agent transmits its local reference value to its in-neighbors, the reference values of the agents are trivially disclosed. In this paper, we investigate the possibility of preserving the privacy of the agents' reference values by adding admissible perturbation signals to the local dynamics and the transmitted-out signals of the agents. Admissible additive perturbation signals are those that do not perturb the final convergence point of the algorithm from the average of the agents' reference values. Our results show that if an adversarial agent has access to the output of another agent and all the input signals transmitted to that agent, the adversary can discover the private reference value of that agent, regardless of the perturbation signals. Otherwise, the privacy of the agent can be preserved. We demonstrate our results through a numerical example.
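The paper's negative result, that an eavesdropper observing one agent's output together with all of its received inputs learns the private reference value regardless of the perturbations, can be illustrated numerically. A hedged Python sketch follows, simplified to message-side obfuscation only, with an illustrative complete graph and signal; it is not the paper's proof construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, T = 4, 1e-3, 30.0
r = rng.uniform(0, 10, n)             # private reference values
x = r.copy()
A = np.ones((n, n)) - np.eye(n)       # complete graph (weight-balanced)
L = np.diag(A.sum(axis=1)) - A

i = 2                                  # agent targeted by the eavesdropper
flow_integral = 0.0                    # adversary's running integral
y = x.copy()
for k in range(int(T / dt)):
    t = k * dt
    g = np.cos(5 * t + np.arange(n)) * np.exp(-t)   # vanishing message obfuscation
    y = x + g                          # what travels on the wire
    flow_integral += dt * (A[i] @ (y[i] - y))       # observable: sum_j a_ij (y_i - y_j)
    x = x + dt * (-L @ y)              # consensus update (no dynamics perturbation)

# Since g vanishes, y_i(T) ~ x_i(T), and integrating x_i' recovers the initial state:
#   r_i = x_i(T) + int_0^T sum_j a_ij (y_i - y_j) dt
print(r[i], y[i] + flow_integral)      # the adversary's estimate matches r_i
```

The reconstruction works because the recorded input and output signals determine the agent's state increment exactly, so the obfuscation washes out of the integral.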
  2. A critical factor for expanding the adoption of networked solutions is ensuring local data privacy of in-network agents implementing a distributed algorithm. In this paper, we consider privacy preservation in the distributed optimization problem in the sense that the local cost parameters should not be revealed. Current approaches to privacy preservation typically sacrifice exact convergence or increase communication overhead. We propose PrivOpt, an intrinsically private distributed optimization algorithm that converges exponentially fast without any convergence error or extra communication channels. We show that when the number of parameters of the local cost is greater than the dimension of the decision variable of the problem, no malicious agent, even one with access to all transmitted-in and -out messages in the network, can obtain the local cost parameters of other agents. As an application study, we show how our proposed PrivOpt algorithm can be used to solve an optimal resource allocation problem with the guarantee that the local cost parameters of all the agents stay private.
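For contrast with PrivOpt's guarantees, here is a hedged Python sketch of a plain consensus-based distributed gradient baseline on quadratic local costs. Everything in it (the costs f_i(x) = 0.5*a_i*(x - b_i)^2, the mixing matrix, the constant step size) is an assumption for illustration, not the PrivOpt update; it shows why naive exchanges leak the private parameters a_i, b_i and why constant-step methods converge only approximately, which is the gap the abstract says PrivOpt closes.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
a = rng.uniform(1, 3, n)              # private curvatures of the local costs
b = rng.uniform(-5, 5, n)             # private minimizers of the local costs
x = rng.normal(size=n)                # each agent's copy of the decision variable

W = np.full((n, n), 1.0 / n)          # doubly stochastic mixing (complete graph)
alpha = 0.05                          # constant step size

for k in range(500):
    grad = a * (x - b)                # gradients of f_i(x) = 0.5*a_i*(x - b_i)^2
    x = W @ x - alpha * grad          # mix with neighbors, then descend: leaks a_i, b_i

x_star = np.sum(a * b) / np.sum(a)    # closed-form minimizer of sum_i f_i
print(x_star)                         # true optimum
print(x)                              # near x_star, up to the O(alpha) bias of constant-step methods
```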
  3. In cooperative multi-agent reinforcement learning (Co-MARL), a team of agents must jointly optimize the team's long-term rewards to learn a designated task. Optimizing rewards as a team often requires inter-agent communication and data sharing, leading to potential privacy implications. We assume privacy considerations prohibit the agents from sharing their environment interaction data. Accordingly, we propose Privacy-Engineered Value Decomposition Networks (PE-VDN), a Co-MARL algorithm that models multi-agent coordination while provably safeguarding the confidentiality of the agents' environment interaction data. We develop PE-VDN by integrating three privacy-engineering techniques that redesign the data flows of the VDN algorithm, an existing Co-MARL algorithm that consolidates the agents' environment interaction data to train a central controller modeling multi-agent coordination. In the first technique, we design a distributed computation scheme that eliminates Vanilla VDN's dependency on sharing environment interaction data. Then, we utilize a privacy-preserving multi-party computation protocol to guarantee that the data flows of the distributed computation scheme do not pose new privacy risks. Finally, we enforce differential privacy to preempt inference threats against the agents' training data (their past environment interactions) when they take actions based on their neural network predictions. We implement PE-VDN in the StarCraft Multi-Agent Challenge (SMAC) and show that it achieves 80% of Vanilla VDN's win rate while maintaining differential privacy levels that provide meaningful privacy guarantees. The results demonstrate that PE-VDN can safeguard the confidentiality of agents' environment interaction data without sacrificing multi-agent coordination.
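Of the three techniques named above, the differential-privacy step is the simplest to sketch in isolation. The Python snippet below shows the standard clip-average-noise Gaussian mechanism applied to a batch of per-sample gradients; the clip norm, noise multiplier, and the hypothetical Q-network batch are illustrative, and the full PE-VDN pipeline (distributed computation plus secure multi-party computation) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def dp_gradient(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1):
    """Clip each per-sample gradient, average, then add calibrated Gaussian noise."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_sample_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)

# Usage on a hypothetical batch of per-sample Q-network gradients:
batch = [rng.normal(size=8) for _ in range(32)]
print(dp_gradient(batch))             # noised update; individual samples are masked
```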
  4. As multi-agent systems proliferate, there is increasing demand for coordination protocols that protect agents' sensitive information while allowing them to collaborate. To help address this need, this paper presents a differentially private formation control framework. Agents' state trajectories are protected using differential privacy, a statistical notion of privacy that protects data by adding noise to it. We provide a private formation control implementation and analyze the impact of privacy upon the system. Specifically, we quantify tradeoffs between privacy level, system performance, and connectedness of the network's communication topology. These tradeoffs are used to develop guidelines for calibrating privacy in terms of control-theoretic quantities, such as steady-state error, without requiring in-depth knowledge of differential privacy. Additional guidelines are developed for treating privacy levels and network topologies as design parameters that tune the network's performance. Simulation results illustrate these tradeoffs and show that strict privacy is inherently compatible with strong system performance.
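The mechanism this abstract builds on, sharing noised state trajectories, fits in a few lines. The Python sketch below runs an illustrative one-dimensional formation consensus where only Laplace-noised positions leave each agent; the gain, noise scale, and update rule are assumptions for demonstration, not the paper's controller, but they exhibit the same tradeoff between privacy level (noise scale b) and steady-state error.

```python
import numpy as np

rng = np.random.default_rng(4)
n, steps, dt = 4, 4000, 0.01
offsets = np.array([0.0, 1.0, 2.0, 3.0])   # desired formation offsets
x = rng.normal(0, 5, n)                    # initial scalar positions
b = 0.1                                    # Laplace noise scale: the privacy knob
gain = 1.0

for _ in range(steps):
    shared = x + rng.laplace(0.0, b, n)    # only privatized states leave each agent
    center = np.mean(shared - offsets)     # noisy estimate of the formation center
    x = x + dt * gain * (center + offsets - x)

print(x - offsets)   # near-identical entries: formation achieved up to noise-induced error
```

Increasing b strengthens privacy but inflates the residual spread of x - offsets, which is the privacy-versus-performance tradeoff the paper quantifies.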
  5. This article considers resilient cooperative state estimation in unreliable multiagent networks. A network of agents aims to collaboratively estimate the value of an unknown vector parameter while an unknown subset of agents suffers Byzantine faults. We refer to the faulty agents as Byzantine agents. Byzantine agents malfunction arbitrarily and may send out highly unstructured messages to other agents in the network. As opposed to fault-free networks, reaching agreement in the presence of Byzantine agents is far from trivial. In this article, we propose a computationally efficient algorithm that is provably robust to Byzantine agents. At each iteration of the algorithm, a good agent performs a gradient-descent update based on noisy local measurements, exchanges its update with other agents in its neighborhood, and robustly aggregates the received messages using coordinate-wise trimmed means. Under mild technical assumptions, we establish that good agents learn the true parameter asymptotically in the almost sure sense. We further complement our analysis by proving a high-probability finite-time convergence rate that encapsulates the network characteristics.
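The aggregation primitive at the heart of the quoted algorithm, the coordinate-wise trimmed mean, is easy to sketch. In the hedged Python example below, the trim count f, the measurement noise, and the Byzantine messages are illustrative; the point is that discarding the f largest and f smallest values in each coordinate suppresses arbitrarily corrupted messages.

```python
import numpy as np

def trimmed_mean(msgs, f):
    """msgs: (num_neighbors, dim) array of received estimates; f: trim count."""
    s = np.sort(msgs, axis=0)              # sort each coordinate independently
    return s[f:len(msgs) - f].mean(axis=0) # average the middle values only

rng = np.random.default_rng(5)
theta_true = np.array([1.0, -2.0, 3.0])
honest = theta_true + rng.normal(0, 0.1, (7, 3))   # noisy honest estimates
byzantine = rng.uniform(-100, 100, (2, 3))         # arbitrary faulty messages
msgs = np.vstack([honest, byzantine])

print(trimmed_mean(msgs, f=2))   # close to theta_true despite the outliers
```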