Title: Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Abstract: Emerging distributed AI systems are revolutionizing big data computing and data-processing capabilities, with growing economic and societal impact. However, recent studies have identified new attack surfaces and risks arising from security, privacy, and fairness issues in AI systems. In this paper, we review representative techniques, algorithms, and theoretical foundations for trustworthy distributed AI through robustness guarantees, privacy protection, and fairness awareness in distributed learning. We first provide a brief overview of alternative architectures for distributed learning, discuss the inherent vulnerabilities of AI algorithms in distributed learning with respect to security, privacy, and fairness, and analyze why these problems arise in distributed learning regardless of the specific architecture. We then provide a unique taxonomy of countermeasures for trustworthy distributed AI, covering (1) robustness to evasion attacks and irregular queries at inference time, and robustness to poisoning attacks, Byzantine attacks, and irregular data distributions during training; (2) privacy protection during distributed learning and during model inference at deployment; and (3) AI fairness and governance with respect to both data and models. We conclude with a discussion of open challenges and future research directions toward trustworthy distributed AI, such as the need for trustworthy AI policy guidelines, responsibility-utility co-design in AI, and incentives and compliance.
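One training-time robustness countermeasure in the taxonomy above is Byzantine-robust aggregation in distributed or federated learning. The following is a minimal illustrative sketch, not the survey's own code: it assumes NumPy and uses coordinate-wise median aggregation, a representative technique that bounds the influence any single poisoned or Byzantine client update can have on the global model.

import numpy as np

def robust_aggregate(client_updates: np.ndarray) -> np.ndarray:
    """Coordinate-wise median over a (num_clients, num_params) update matrix."""
    return np.median(client_updates, axis=0)

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(9, 4))        # nine honest client updates
byzantine = np.full((1, 4), 100.0)                # one adversarial update
updates = np.vstack([honest, byzantine])

print("mean aggregate:  ", updates.mean(axis=0))        # skewed by the attacker
print("median aggregate:", robust_aggregate(updates))   # stays near honest values

Simple averaging lets a single outlier dominate the aggregate, while the coordinate-wise median keeps it near the honest updates; this contrast is the intuition behind the Byzantine-robust training defenses surveyed above.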
Award ID(s): 2312758
PAR ID: 10515962
Publisher / Repository: ACM
Journal Name: ACM Computing Surveys
ISSN: 0360-0300
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. The prevalence of deep learning has drawn attention to the privacy protection of sensitive data. Various privacy threats have been demonstrated in which an adversary can steal a model owner's private data, and countermeasures have been introduced to achieve privacy-preserving deep learning. However, most studies have focused only on data privacy during training and have ignored privacy during inference. In this paper, we devise a new set of attacks to compromise inference-data privacy in collaborative deep learning systems. Specifically, when a deep neural network and the corresponding inference task are split and distributed across different participants, one malicious participant can accurately recover an arbitrary input fed into the system, even without access to other participants' data or computations, or to prediction APIs for querying the system. We evaluate our attacks under different settings, models, and datasets to show their effectiveness and generalization. We also study the characteristics of deep learning models that make them susceptible to such inference privacy threats, providing insights and guidelines for developing more privacy-preserving collaborative systems and algorithms.
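To make the threat model above concrete, here is a minimal sketch (PyTorch assumed; the architecture, sizes, and names are hypothetical stand-ins, not the paper's setup) of split inference, where intermediate activations cross a trust boundary and a malicious participant can train a decoder that inverts them back toward the private input.

import torch
import torch.nn as nn

# Participant A holds the front of the model; participant B holds the rest.
front = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())  # runs on A
back  = nn.Sequential(nn.Linear(256, 10))                            # runs on B

# Hypothetical inverse network the adversary (B) trains on auxiliary data
# to map the activations it observes back to inputs.
decoder = nn.Sequential(nn.Linear(256, 784), nn.Sigmoid())

x = torch.rand(32, 1, 28, 28)            # stand-in batch of private inputs
z = front(x).detach()                    # B only sees activations, not A's model
x_hat = decoder(z).view(32, 1, 28, 28)   # adversary's reconstruction attempt

# Training step (sketch): minimize reconstruction error on auxiliary data.
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(x_hat, x)
loss.backward()
opt.step()
print(f"reconstruction MSE: {loss.item():.4f}")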
  2. Integrated sensing and communication (ISAC) is considered an emerging technology for sixth-generation (6G) wireless and mobile networks. It is expected to enable a wide variety of vertical applications, ranging from unmanned aerial vehicle (UAV) detection for critical infrastructure protection to physiological sensing for mobile healthcare. Despite its significant socioeconomic benefits, ISAC technology also raises unique challenges in system security and user privacy. Being aware of the security and privacy challenges, understanding the trade-off between security and communication performance, and exploring potential countermeasures in practical systems are critical to the wide adoption of this technology in various application scenarios. This talk will discuss various security and privacy threats in emerging ISAC systems, with a focus on communication-centric ISAC systems, that is, systems that use cellular or WiFi infrastructure for sensing. We will then examine potential mechanisms to secure ISAC systems and protect user privacy at the physical and data layers under different sensing modes. At the wireless physical (PHY) layer, an ISAC system is subject to both passive and active attacks, such as unauthorized passive sensing, unauthorized active sensing, signal spoofing, and jamming. Potential countermeasures include wireless channel/radio frequency (RF) environment obfuscation, waveform randomization, anti-jamming communication, and spectrum/RF monitoring. At the data layer, user privacy can be compromised during data collection, sharing, storage, and usage. For sensing systems powered by artificial intelligence (AI), user privacy can also be compromised during the model training and inference stages, and an attacker could falsify the sensing data to achieve a malicious goal. Potential countermeasures include privacy-enhancing technologies (PETs) such as data anonymization, differential privacy, homomorphic encryption, trusted execution, and data synthesis.
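As a concrete illustration of one data-layer PET named above, here is a minimal sketch (NumPy assumed; the function and parameter names are hypothetical, not from the talk) of differential privacy via the Laplace mechanism, which releases a noisy aggregate of bounded sensor readings instead of raw per-user values.

import numpy as np

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Release an epsilon-DP estimate of the mean of bounded sensor readings."""
    clipped = np.clip(values, lower, upper)       # bound each user's influence
    sensitivity = (upper - lower) / len(clipped)  # L1 sensitivity of the mean
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

readings = np.array([61.2, 72.5, 68.9, 70.1])     # stand-in heart-rate samples
print(dp_mean(readings, lower=40.0, upper=180.0, epsilon=1.0))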
  3. This survey paper provides an overview of the current state of artificial intelligence (AI) attacks and the resulting risks to AI security and privacy as AI becomes more prevalent in applications and services. The risks associated with AI attacks and security breaches are becoming increasingly apparent and are causing substantial financial and social losses. This paper categorizes the different types of attacks on AI models, including adversarial attacks, model inversion attacks, data poisoning attacks, data extraction attacks, and membership inference attacks. The paper also emphasizes the importance of developing secure and robust AI models to ensure the privacy and security of sensitive data. Through a systematic literature review, this survey comprehensively analyzes the current state of AI attacks, the associated security and privacy risks, and detection techniques.
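As an illustration of the first attack class listed above, here is a minimal sketch (PyTorch assumed; the model and data are stand-ins) of an adversarial evasion attack using the fast gradient sign method (FGSM), which perturbs the input along the sign of the loss gradient: x_adv = x + eps * sign(grad_x loss).

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))  # stand-in classifier
x = torch.rand(1, 1, 28, 28, requires_grad=True)         # stand-in input
y = torch.tensor([3])                                    # stand-in true label

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

eps = 0.1                                                # perturbation budget
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0)        # perturbed input
print(model(x_adv).argmax(dim=1))                        # possibly flipped label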
  4. Machine learning models are vulnerable to both security attacks (e.g., adversarial examples) and privacy attacks (e.g., private attribute inference). We take a first step toward mitigating both classes of attacks while maintaining task utility. In particular, we propose an information-theoretic framework that achieves these goals through the lens of representation learning, i.e., learning representations that are robust to both adversarial examples and attribute-inference adversaries. We also derive novel theoretical results under this framework, including the inherent trade-off between adversarial robustness/utility and attribute privacy, and guarantees on attribute privacy leakage against attribute-inference adversaries.
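The abstract does not give the exact objective, but information-theoretic representation-learning frameworks of this kind typically balance mutual-information terms. The following LaTeX sketch is illustrative notation only, not the paper's formulation: Z = f_theta(X) is the learned representation, Y the task label, S the private attribute, and beta, gamma are assumed trade-off weights.

% Illustrative objective (assumed notation, not the paper's exact formulation):
% keep Z informative for Y, uninformative about S, and stable under
% adversarially perturbed inputs \tilde{X}.
\begin{equation*}
\max_{\theta}\;
\underbrace{I(Z;Y)}_{\text{task utility}}
\;-\;\beta\,\underbrace{I(Z;S)}_{\text{attribute privacy}}
\;-\;\gamma\,\underbrace{\mathbb{E}\!\left[d\big(f_{\theta}(X),\,f_{\theta}(\tilde{X})\big)\right]}_{\text{adversarial robustness}},
\qquad Z = f_{\theta}(X),
\end{equation*}
% where d is a distance on representations. The trade-off the paper proves
% corresponds to the tension between the first term and the other two.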