Emerging distributed AI systems are revolutionizing big data computing and data processing capabilities, with growing economic and societal impact. However, recent studies have identified new attack surfaces and risks caused by security, privacy, and fairness issues in AI systems. In this paper, we review representative techniques, algorithms, and theoretical foundations for trustworthy distributed AI through robustness guarantees, privacy protection, and fairness awareness in distributed learning. We first provide a brief overview of alternative architectures for distributed learning, discuss inherent vulnerabilities in the security, privacy, and fairness of AI algorithms in distributed learning, and analyze why these problems arise in distributed learning regardless of the specific architecture. Then we provide a unique taxonomy of countermeasures for trustworthy distributed AI, covering (1) robustness to evasion attacks and irregular queries at inference, and robustness to poisoning attacks, Byzantine attacks, and irregular data distributions during training; (2) privacy protection during distributed learning and model inference at deployment; and (3) AI fairness and governance with respect to both data and models. We conclude with a discussion of open challenges and future research directions toward trustworthy distributed AI, such as the need for trustworthy AI policy guidelines, responsibility-utility co-design for AI, and incentives and compliance.
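To make one branch of that taxonomy concrete, the sketch below illustrates a standard Byzantine-robust aggregation rule, the coordinate-wise median, which a distributed learning server can use in place of plain averaging during training. This is a minimal illustration of the general technique; the function name and data are hypothetical, not drawn from the paper.

```python
import numpy as np

def coordinate_median_aggregate(client_updates):
    """Byzantine-robust aggregation: take the coordinate-wise median of
    client updates instead of their mean, so a minority of arbitrarily
    corrupted updates cannot pull the aggregate far from the honest value."""
    return np.median(np.stack(client_updates), axis=0)

# Illustrative run: nine honest clients near the true update, one
# Byzantine client sending an extreme vector (all values hypothetical).
rng = np.random.default_rng(0)
honest = [rng.normal(loc=1.0, scale=0.1, size=4) for _ in range(9)]
byzantine = [np.full(4, 1e6)]
print(coordinate_median_aggregate(honest + byzantine))  # stays near 1.0
print(np.mean(np.stack(honest + byzantine), axis=0))    # the plain mean is ruined
```

The design point is that the median's breakdown resistance holds per coordinate, so a single colluding minority cannot steer the global model no matter how extreme its updates are.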
AI-powered Network Security: Approaches and Research Directions
Networks are today a critical infrastructure, so their resilience against attacks is crucial. Protecting networks requires a comprehensive security life-cycle and the deployment of different protection techniques. To make defenses more effective, recent solutions leverage AI techniques. In this paper, we discuss AI-based protection techniques according to a security life-cycle consisting of several phases: (i) Prepare; (ii) Monitor and Diagnose; and (iii) React, Recover, and Fix. For each phase, we discuss relevant AI techniques, initial approaches, and research directions.
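As a hedged illustration of the Monitor and Diagnose phase, the following sketch fits an Isolation Forest on network-flow features assumed benign and flags outliers for the React phase. The feature set, values, and contamination rate are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical flow features: [bytes_sent, packets, duration_s, distinct_ports]
rng = np.random.default_rng(42)
benign_flows = rng.normal(loc=[5e4, 40, 2.0, 3],
                          scale=[1e4, 10, 0.5, 1],
                          size=(500, 4))

# Prepare: fit a detector on traffic assumed benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(benign_flows)

# Monitor and Diagnose: -1 flags a suspected anomalous flow.
burst = np.array([[9e5, 2000.0, 0.1, 60.0]])  # e.g., a scan hitting many ports
print(detector.predict(burst))                # [-1] -> raise an alert, then React
```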
- PAR ID: 10315112
- Date Published:
- Journal Name: 8th NSysS 2021: 8th International Conference on Networking, Systems and Security
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Hardware security creates a hardware-based security foundation for the secure and reliable operation of the systems and applications used in modern life. The presence of design-for-security, security assurance, and general security design life-cycle practices in the product life cycles of many large semiconductor design and manufacturing companies indicates that industry has recognized the importance of hardware security. However, the high cost, time, and effort of building security into designs and assuring their security, much of it due to manual processes, remain an important obstacle to the economics of secure product development. This paper presents several promising directions for automating design-for-security and security assurance practices to reduce the overall time and cost of secure product development. First, we present the security verification challenges of SoCs, the possible vulnerabilities that could be introduced inadvertently by tools mapping a design model at one level of abstraction to a lower level, and our solution to the problem: automatically mapping security properties from one level to the next lower level, incorporating techniques for extending and expanding the properties. Then, we discuss the foundation necessary for further automation of formal security analysis of a design, incorporating the threat model and common security vulnerabilities into an intermediate representation of a hardware model, which can be used to automatically determine whether a direct or indirect flow of information could compromise the confidentiality or integrity of security assets. Finally, we discuss a pre-silicon framework for practical, time- and cost-effective power side-channel leakage analysis: root-causing the leakage with automatically generated leakage profiles of circuit nodes, providing insight for mitigation by addressing the high-leakage nodes, and assuring the effectiveness of the mitigation by re-profiling the leakage to confirm an acceptable level of elimination. We hope that sharing these efforts and ideas with the security research community can accelerate the evolution of security-aware CAD tools targeted at design for security and security assurance, enriching the ecosystem with tools from multiple vendors offering more capabilities and higher performance.
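For a sense of what automated pre-silicon leakage profiling computes, here is a minimal sketch of a TVLA-style Welch's t-test over simulated power traces. The traces, the leaky time index, and the data offset are synthetic stand-ins rather than the paper's framework; the |t| > 4.5 threshold is the conventional TVLA criterion for first-order leakage.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_traces, n_samples = 1000, 200

# Simulated power traces for fixed vs. random input sets; a small
# data-dependent offset at time index 120 stands in for a leaky node.
fixed_set = rng.normal(0.0, 1.0, (n_traces, n_samples))
random_set = rng.normal(0.0, 1.0, (n_traces, n_samples))
fixed_set[:, 120] += 0.3

# Welch's t-test per time sample; |t| > 4.5 flags first-order leakage
# worth root-causing back to the responsible circuit nodes.
t_stat, _ = stats.ttest_ind(fixed_set, random_set, axis=0, equal_var=False)
print(np.where(np.abs(t_stat) > 4.5)[0])  # expect index 120 to be flagged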
Although Internet routing security best practices have recently seen auspicious increases in uptake, Internet Service Providers (ISPs) have limited incentives to deploy them. They are operationally complex and expensive to implement and provide little competitive advantage. The practices with significant uptake protect only against origin hijacks, leaving unresolved the more general threat of path hijacks. We propose a new approach to improved routing security that achieves four design goals: improved incentive alignment to implement best practices; protection against path hijacks; expanded scope of such protection to customers of those engaged in the practices; and reliance on existing capabilities rather than needing complex new software in every participating router. Our proposal leverages an existing coherent core of interconnected ISPs to create a zone of trust, a topological region that protects not only all networks in the region, but all directly attached customers of those networks. Customers benefit from choosing ISPs committed to the practices, and ISPs thus benefit from committing to the practices. We discuss the concept of a zone of trust as a new, more pragmatic approach to security that improves security in a region of the Internet, as opposed to striving for global deployment. We argue that the aspiration for global deployment is unrealistic, since the global Internet includes malicious actors. We compare our approach to other schemes and discuss how a related proposal, ASPA, could be used to increase the scope of protection our scheme achieves. We hope this proposal inspires discussion of how the industry can make practical, measurable progress against the threat of route hijacks in the short term by leveraging institutionalized cooperation rooted in transparency and accountability.
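A minimal sketch of the membership check at the heart of a zone of trust, assuming the zone is known as a set of AS numbers: a route is treated as protected only if every hop on its AS path stayed inside the zone. The rule and AS numbers below are simplified illustrations, not the authors' specification.

```python
# Hypothetical AS numbers; the zone is the set of ISPs committed to
# the routing security practices.
ZONE_OF_TRUST = {64500, 64501, 64502, 64503}

def path_protected(as_path, zone=ZONE_OF_TRUST):
    """Treat a route as protected only if every AS on its path lies
    inside the zone of trust, so no hop was announced by a network
    outside the region committed to the practices."""
    return all(asn in zone for asn in as_path)

print(path_protected([64500, 64502, 64503]))  # True  -> fully inside the zone
print(path_protected([64500, 65551, 64503]))  # False -> path left the zone
```

Under this simplified rule, a path hijack mounted from outside the zone necessarily fails the check, which is what gives customers an incentive to attach to zone-member ISPs.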
Recent advances in Artificial Intelligence (AI) have brought society closer to the long-held dream of creating machines to help with both common and complex tasks and functions. From recommending movies to detecting disease in its earliest stages, AI has become an aspect of daily life many people accept without scrutiny. Despite its functionality and promise, AI has inherent security risks that users should understand and programmers must be trained to address. The ICE (integrity, confidentiality, and equity) cybersecurity labs developed by a team of cybersecurity researchers address these vulnerabilities to AI models through a series of hands-on, inquiry-based labs. Through experimenting with and manipulating data models, students can experience firsthand how adversarial samples and bias can degrade the integrity, confidentiality, and equity of deep learning neural networks, as well as implement security measures to mitigate these vulnerabilities. This article addresses the pedagogical approach underpinning the ICE labs, and discusses both sample activities and technological considerations for teachers who want to implement these labs with their students.
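As a hedged example of the kind of adversarial sample such labs let students generate, the sketch below implements the fast gradient sign method (FGSM) against a toy PyTorch classifier; the model, input, and epsilon are placeholders, not the labs' actual materials.

```python
import torch
import torch.nn as nn

# Toy stand-in classifier; in a lab this would be a trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm(image, label, epsilon=0.1):
    """Fast gradient sign method: nudge every pixel a step of size
    epsilon in the direction that increases the loss, yielding an
    adversarial sample that can flip the model's prediction."""
    image = image.clone().requires_grad_(True)
    loss_fn(model(image), label).backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)        # a placeholder "image"
y = torch.tensor([3])               # its assumed true class
x_adv = fgsm(x, y)
print((x_adv - x).abs().max())      # perturbation bounded by epsilon
```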