Emerging distributed AI systems are revolutionizing big data computing and data processing capabilities, with growing economic and societal impact. However, recent studies have identified new attack surfaces and risks arising from security, privacy, and fairness issues in AI systems. In this paper, we review representative techniques, algorithms, and theoretical foundations for trustworthy distributed AI through robustness guarantees, privacy protection, and fairness awareness in distributed learning. We first provide a brief overview of alternative architectures for distributed learning; discuss the inherent vulnerabilities of AI algorithms in distributed learning with respect to security, privacy, and fairness; and analyze why these problems arise in distributed learning regardless of the specific architecture. We then provide a unique taxonomy of countermeasures for trustworthy distributed AI, covering (1) robustness to evasion attacks and irregular queries at inference time, and robustness to poisoning attacks, Byzantine attacks, and irregular data distributions during training; (2) privacy protection during distributed learning and during model inference at deployment; and (3) AI fairness and governance with respect to both data and models. We conclude with a discussion of open challenges and future research directions toward trustworthy distributed AI, such as the need for trustworthy AI policy guidelines, AI responsibility-utility co-design, and incentives and compliance.
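The taxonomy's first pillar includes robustness to Byzantine attacks during distributed training, which is commonly addressed with robust aggregation rules. As an illustrative sketch only (a standard technique from the literature, not a method this paper proposes), coordinate-wise median aggregation bounds the influence of any single malicious worker on the aggregated model update:

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client updates by taking the median of each coordinate,
    so a single Byzantine client cannot drag the aggregate arbitrarily far."""
    return np.median(np.stack(updates), axis=0)

# Four honest clients report similar gradients; one Byzantine client sends junk.
honest = [np.array([1.0, 2.0, 3.0]) for _ in range(4)]
byzantine = [np.array([100.0, -100.0, 100.0])]
agg = coordinate_wise_median(honest + byzantine)  # stays near the honest gradients
```

With simple averaging, the Byzantine update would shift every coordinate by roughly 20; the per-coordinate median ignores the outlier entirely as long as honest clients form a majority.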
- PAR ID: 10315112
- Journal Name: 8th NSysS 2021: 8th International Conference on Networking, Systems and Security
- Sponsoring Org: National Science Foundation
More Like this
-
Hardware security creates a hardware-based foundation for the secure and reliable operation of the systems and applications used in modern life. The presence of design-for-security, security assurance, and general security design life cycle practices in the product life cycles of many large semiconductor design and manufacturing companies indicates that industry has recognized the importance of hardware security. However, the high cost, time, and effort of building security into designs and assuring it, largely due to reliance on manual processes, remain an important obstacle to economical secure product development. This paper presents several promising directions for automating design-for-security and security assurance practices to reduce the overall time and cost of secure product development. First, we present the security verification challenges of SoCs and the vulnerabilities that could be introduced inadvertently by tools mapping a design model at one level of abstraction to a lower level, along with our solution: automatically mapping security properties from one level to the next lower level, incorporating techniques for extending and expanding the properties. Then, we discuss the foundation necessary for further automation of formal security analysis, in which a threat model and common security vulnerabilities are incorporated into an intermediate representation of the hardware model and used to automatically determine whether direct or indirect information flows could compromise the confidentiality or integrity of security assets.
Finally, we discuss a pre-silicon framework for practical, time- and cost-effective power side-channel leakage analysis that root-causes leakage using automatically generated leakage profiles of circuit nodes, provides insight for mitigating the leakage by addressing high-leakage nodes, and verifies the mitigation's effectiveness by re-profiling the leakage to confirm it has been reduced to an acceptable level. We hope that sharing these efforts and ideas with the security research community can accelerate the evolution of security-aware CAD tools targeted at design for security and security assurance, enriching the ecosystem with tools from multiple vendors offering more capabilities and higher performance.
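The leakage-profiling idea can be sketched in a highly simplified form. The toy example below is our illustration, not the framework described in the abstract; all names and the synthetic data are hypothetical. It ranks simulated circuit nodes by the absolute Pearson correlation between their per-trace switching activity and a secret-dependent leakage hypothesis, flagging high-leakage nodes as mitigation targets:

```python
import numpy as np

def rank_leaky_nodes(node_activity, hypothesis):
    """Rank nodes by |Pearson correlation| between per-trace switching
    activity (traces x nodes) and a secret-dependent leakage hypothesis
    (one value per trace). Higher rank = leakier node."""
    a = node_activity - node_activity.mean(axis=0)
    h = hypothesis - hypothesis.mean()
    denom = np.sqrt((a ** 2).sum(axis=0) * (h ** 2).sum()) + 1e-12
    corr = (a * h[:, None]).sum(axis=0) / denom
    return np.argsort(-np.abs(corr))

rng = np.random.default_rng(0)
n_traces, n_nodes = 500, 8
# Hypothesis: e.g. Hamming weight of a secret-dependent intermediate value.
hypothesis = rng.integers(0, 9, n_traces).astype(float)
node_activity = rng.normal(size=(n_traces, n_nodes))
node_activity[:, 3] += 0.5 * hypothesis  # node 3 leaks the secret
ranking = rank_leaky_nodes(node_activity, hypothesis)  # node 3 ranks first
```

A real pre-silicon flow would derive node activity from gate-level simulation rather than random data, but the ranking step is the same shape: correlate, sort, and hand the top nodes to the designer for mitigation.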
-
Abstract: This paper highlights the overall endeavors of the NSF AI Institute for Future Edge Networks and Distributed Intelligence (AI‐EDGE) to create a research, education, knowledge transfer, and workforce development environment for developing technological leadership in next‐generation edge networks (6G and beyond) and artificial intelligence (AI). The research objectives of AI‐EDGE are twofold: “AI for Networks” and “Networks for AI.” The former develops new foundational AI techniques to revolutionize technologies for next‐generation edge networks, while the latter develops advanced networking techniques to enhance distributed and interconnected AI capabilities at edge devices. These research investigations are conducted across eight symbiotic thrust areas that work together to address the main challenges toward those goals. Such a synergistic approach ensures a virtuous research cycle, so that advances in one area accelerate advances in the other, thereby paving the way for a new generation of networks that are not only intelligent but also efficient, secure, self‐healing, and capable of solving large‐scale distributed AI challenges. This paper also outlines the institute's endeavors in education and workforce development, as well as broadening participation and reinforcing collaboration.
-
Abstract: Although Internet routing security best practices have recently seen auspicious increases in uptake, Internet Service Providers (ISPs) have limited incentives to deploy them. They are operationally complex and expensive to implement and provide little competitive advantage. The practices with significant uptake protect only against origin hijacks, leaving unresolved the more general threat of path hijacks. We propose a new approach to improved routing security that achieves four design goals: improved incentive alignment to implement best practices; protection against path hijacks; expanded scope of such protection to customers of those engaged in the practices; and reliance on existing capabilities rather than needing complex new software in every participating router. Our proposal leverages an existing coherent core of interconnected ISPs to create a zone of trust, a topological region that protects not only all networks in the region, but all directly attached customers of those networks. Customers benefit from choosing ISPs committed to the practices, and ISPs thus benefit from committing to the practices. We discuss the concept of a zone of trust as a new, more pragmatic approach to security that improves security in a region of the Internet, as opposed to striving for global deployment. We argue that the aspiration for global deployment is unrealistic, since the global Internet includes malicious actors. We compare our approach to other schemes and discuss how a related proposal, ASPA, could be used to increase the scope of protection our scheme achieves. We hope this proposal inspires discussion of how the industry can make practical, measurable progress against the threat of route hijacks in the short term by leveraging institutionalized cooperation rooted in transparency and accountability.
-
Abstract: Bioenergy is widely considered a sustainable alternative to fossil fuels. However, large‐scale applications of biomass‐based energy products are limited by challenges related to feedstock variability, conversion economics, and supply chain reliability. Artificial intelligence (AI) has been applied to bioenergy systems in recent decades to address those challenges. This paper reviewed 164 articles published between 2005 and 2019 that applied different AI techniques to bioenergy systems. The review focuses on identifying the unique capabilities of various AI techniques in addressing bioenergy‐related research challenges and improving the performance of bioenergy systems. Specifically, we characterized AI studies by their input variables, output variables, AI techniques, dataset size, and performance, and we examined AI applications throughout the life cycle of bioenergy systems. We identified four areas in which AI has been most applied: (1) prediction of biomass properties; (2) prediction of the process performance of biomass conversion, across different conversion pathways and technologies; (3) prediction of biofuel properties and the performance of bioenergy end‐use systems; and (4) supply chain modeling and optimization. Based on the review, AI is particularly useful for generating data that are hard to measure directly, improving traditional models of biomass conversion and biofuel end use, and overcoming the limitations of traditional computing techniques for bioenergy supply chain design and optimization. Future research should develop standardized, practical procedures for selecting AI techniques and determining training samples; enhance data collection, documentation, and sharing across bioenergy‐related areas; and explore the potential of AI to support the sustainable development of bioenergy systems from a holistic perspective.
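As a minimal illustration of the first application area, prediction of biomass properties, a regression surrogate can map easily measured features to a property that is costly to measure directly. The sketch below is ours, not from any reviewed study: the data are synthetic, and the proximate-analysis feature set and target (higher heating value) are a common but assumed choice.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Synthetic proximate-analysis features: fixed carbon, volatile matter, ash (wt%).
X = np.column_stack([
    rng.uniform(10, 25, n),   # fixed carbon
    rng.uniform(60, 85, n),   # volatile matter
    rng.uniform(1, 15, n),    # ash
])
# Toy ground truth loosely shaped like an HHV correlation (MJ/kg), plus noise.
y = 0.35 * X[:, 0] + 0.18 * X[:, 1] - 0.12 * X[:, 2] + rng.normal(0, 0.3, n)

# Ordinary least squares with a bias term as the surrogate model.
A = np.hstack([X, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

In practice the reviewed studies use richer models (neural networks, ensembles) and measured datasets, but the workflow is the same: assemble input variables, fit, and validate predictive performance before using the surrogate in place of direct measurement.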