Title: Reliability and Security of AI Hardware
In recent years, Artificial Intelligence (AI) systems have achieved revolutionary capabilities, providing intelligent solutions that surpass human skills in many cases. However, such capabilities come with power-hungry computational workloads. Hardware acceleration has therefore become as fundamental as software design for improving the energy efficiency, silicon area, and latency of AI systems, and innovative hardware platforms, architectures, and compiler-level approaches are being used to accelerate AI workloads. Crucially, these AI acceleration platforms are being adopted in application domains where dependability is paramount, such as autonomous driving, healthcare, banking, space exploration, and Industry 4.0. Unfortunately, the complexity of both AI software and hardware makes dependability evaluation and improvement extremely challenging. Studies have addressed both the security and the reliability of AI systems, including vulnerability assessments and countermeasures against random faults as well as analyses of side-channel attacks. This paper describes and discusses various reliability and security threats in AI systems, and presents representative case studies along with corresponding efficient countermeasures.
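As an illustration of the random-fault vulnerability assessments mentioned above, the sketch below injects single-bit upsets into a quantized weight tensor so that clean and faulty accuracy can be compared. It is a minimal, framework-agnostic example; the tensor shape, fault count, and quantization format are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def flip_random_bits(weights_q8, n_faults, rng=None):
    """Inject single-bit upsets into an int8 weight tensor to emulate random hardware faults."""
    rng = rng or np.random.default_rng()
    faulty = weights_q8.copy()
    flat = faulty.reshape(-1)
    idx = rng.choice(flat.size, size=n_faults, replace=False)  # distinct weights to corrupt
    bits = rng.integers(0, 8, size=n_faults)                    # bit position within each weight
    # XOR the chosen bit of each selected weight (view as uint8 so the flip is well defined)
    flat_u8 = flat.view(np.uint8)
    flat_u8[idx] ^= (1 << bits).astype(np.uint8)
    return faulty

# Hypothetical usage: compare a clean layer against its fault-injected copy
weights = np.random.default_rng(0).integers(-128, 127, size=(64, 64), dtype=np.int8)
faulty_weights = flip_random_bits(weights, n_faults=16)
print("weights changed:", int((weights != faulty_weights).sum()))
```

In a real campaign this corruption step would be repeated over many random seeds and fault rates, with the model's accuracy recorded for each run to estimate vulnerability.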
Award ID(s):
1902532
PAR ID:
10523760
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Format(s):
Medium: X
Location:
The Hague, Netherlands
Sponsoring Org:
National Science Foundation
More Like this
  1. Recently, with the advent of the Internet of Everything and 5G networks, the amount of data generated by edge scenarios such as autonomous vehicles, smart industry, 4K/8K video, virtual reality (VR), and augmented reality (AR) has exploded. These trends impose real-time, hardware-dependence, low-power, and security requirements on edge facilities and have rapidly popularized edge computing. Meanwhile, artificial intelligence (AI) workloads have dramatically shifted the computing paradigm from cloud services to mobile applications. Unlike AI in the cloud or on mobile platforms, which is widely deployed and well studied, AI workload performance and its resource impact at the edge are not yet well understood; an in-depth analysis and comparison of their advantages, limitations, performance, and resource consumption in edge environments is still lacking. In this paper, we perform a comprehensive study of representative AI workloads on edge platforms. We first survey modern edge hardware and popular AI workloads. We then quantitatively evaluate three categories (classification, image-to-image, and segmentation) of the most popular and widely used AI applications in realistic edge environments based on the Raspberry Pi, Nvidia TX2, and similar devices. We find that the interaction between hardware and neural network models incurs non-negligible impact and overhead on AI workloads at the edge. Our experiments show that performance variation and differences in resource footprint limit the availability of certain types of workloads and algorithms on edge platforms, and that users need to select the appropriate workload, model, and algorithm based on the requirements and characteristics of the edge environment.
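    To make the kind of measurement discussed above concrete, the following sketch profiles the latency distribution and (Python-level) peak memory of an inference callable, as one might do on a Raspberry Pi or TX2. The function and the stand-in model are hypothetical; a real study would use the framework's own profilers and hardware counters.

```python
import time
import tracemalloc

def profile_inference(model_fn, inputs, warmup=3, runs=20):
    """Rough latency/peak-memory profile of an inference callable on an edge device.
    model_fn and inputs are placeholders for whatever framework is under test;
    tracemalloc only sees Python allocations, so memory here is a coarse proxy."""
    for _ in range(warmup):          # discard cold-start effects (caches, JIT, page faults)
        model_fn(inputs)
    tracemalloc.start()
    latencies = []
    for _ in range(runs):
        t0 = time.perf_counter()
        model_fn(inputs)
        latencies.append(time.perf_counter() - t0)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    latencies.sort()
    return {
        "p50_ms": 1000 * latencies[len(latencies) // 2],
        "p95_ms": 1000 * latencies[int(0.95 * len(latencies)) - 1],
        "peak_alloc_mb": peak / 1e6,
    }

# Hypothetical usage with a stand-in "model"
stats = profile_inference(lambda x: sum(v * v for v in x), list(range(10000)))
print(stats)
```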
  2. With the deployment of artificial intelligence (AI) algorithms in a wide variety of applications, there is an increasing need for high-performance computing capabilities. As a result, different hardware platforms have been utilized for acceleration. Among these hardware accelerators, field-programmable gate arrays (FPGAs) have gained considerable attention due to their reprogrammability, which provides customized control logic and computing operators. For example, FPGAs have recently been adopted for on-demand cloud services by leading cloud providers such as Amazon and Microsoft, providing acceleration for various compute-intensive tasks. While the co-residency of multiple tenants on a cloud FPGA chip increases the efficiency of resource utilization, it also creates unique, under-explored attack surfaces. In this paper, we exploit the vulnerability associated with the shared power distribution network on cloud FPGAs. We present a stealthy power attack that can be remotely launched by a malicious tenant, shutting down the entire chip and resulting in denial of service for other co-located benign tenants. Specifically, we propose stealthy-shutdown, a well-timed power attack implemented in two steps: (1) the attacker monitors the real-time FPGA power consumption through ring-oscillator-based voltage sensors, and (2) when the power consumed by the other tenants rises above a certain threshold, the attacker injects a well-timed power load to shut down the FPGA system. Because the injected load accounts for only a small portion of the overall power consumption, the attack remains stealthy to the cloud FPGA operator. We implement and validate the proposed attack on three FPGA evaluation kits running real-world applications. The attack results in a stealthy shutdown, demonstrating severe security concerns of co-tenancy on cloud FPGAs. We also offer two countermeasures that can mitigate such power attacks.
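    The toy simulation below illustrates the two-step timing logic described in this abstract, i.e., why a small, well-timed load can push the total draw past a shutdown limit while remaining a minor fraction of overall consumption. All power values and thresholds are invented for illustration, and nothing here interacts with real FPGA hardware.

```python
import random

# Toy simulation of the timing logic: the attacker's load is only triggered when the
# (simulated) voltage-sensor reading indicates other tenants are already drawing heavy
# power, so the combined draw crosses the board's shutdown limit while the attacker's
# own contribution stays small. All numbers are made up for illustration.
SHUTDOWN_LIMIT = 100.0    # hypothetical board power limit (W)
TRIGGER_THRESHOLD = 80.0  # attacker fires only above this sensed level (W)
ATTACKER_LOAD = 25.0      # small relative to the victims' peak draw (W)

def benign_power(t):
    """Stand-in for power drawn by co-located tenants, as inferred via RO-based sensors."""
    return 60.0 + 30.0 * random.random()

tripped_at = None
for t in range(1000):
    sensed = benign_power(t)              # step 1: monitor real-time consumption
    if sensed > TRIGGER_THRESHOLD:        # step 2: well-timed injection
        total = sensed + ATTACKER_LOAD
        if total > SHUTDOWN_LIMIT:
            tripped_at = (t, sensed, total)
            break

if tripped_at:
    t, sensed, total = tripped_at
    print(f"t={t}: tenants={sensed:.1f} W, +attacker={ATTACKER_LOAD} W -> {total:.1f} W > limit")
```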
  3. Emerging distributed AI systems are revolutionizing big data computing and data processing capabilities, with growing economic and societal impact. However, recent studies have identified new attack surfaces and risks caused by security, privacy, and fairness issues in AI systems. In this paper, we review representative techniques, algorithms, and theoretical foundations for trustworthy distributed AI through robustness guarantees, privacy protection, and fairness awareness in distributed learning. We first provide a brief overview of alternative architectures for distributed learning, discuss the inherent vulnerabilities of AI algorithms in distributed learning with respect to security, privacy, and fairness, and analyze why these problems arise in distributed learning regardless of the specific architecture. We then provide a unique taxonomy of countermeasures for trustworthy distributed AI, covering (1) robustness to evasion attacks and irregular queries at inference time, and robustness to poisoning attacks, Byzantine attacks, and irregular data distributions during training; (2) privacy protection during distributed learning and during model inference at deployment; and (3) AI fairness and governance with respect to both data and models. We conclude with a discussion of open challenges and future research directions toward trustworthy distributed AI, such as the need for trustworthy AI policy guidelines, responsibility-utility co-design for AI, and incentives and compliance.
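    As one concrete instance of the robustness-to-Byzantine-attacks category in this taxonomy, the sketch below uses coordinate-wise median aggregation in place of plain federated averaging. This is a standard technique from the literature, shown here only to make the category tangible; it is not necessarily the specific method covered by this survey.

```python
import numpy as np

def coordinate_median_aggregate(client_updates):
    """Coordinate-wise median of client model updates: a common Byzantine-robust
    alternative to plain federated averaging."""
    stacked = np.stack(client_updates)   # shape: (num_clients, num_params)
    return np.median(stacked, axis=0)

# Hypothetical round: 9 honest clients plus one Byzantine client sending a huge update.
rng = np.random.default_rng(1)
honest = [rng.normal(0.0, 0.1, size=8) for _ in range(9)]
byzantine = [np.full(8, 1e6)]
agg_mean = np.mean(np.stack(honest + byzantine), axis=0)      # poisoned by the outlier
agg_median = coordinate_median_aggregate(honest + byzantine)  # barely affected
print("mean  :", np.round(agg_mean, 2))
print("median:", np.round(agg_median, 3))
```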
  4. With the growing demand for enhanced performance and scalability in cloud applications and systems, data center architectures are evolving to incorporate heterogeneous computing fabrics that leverage CPUs, GPUs, and FPGAs. Unlike traditional processing platforms such as CPUs and GPUs, FPGAs offer the unique ability to be reconfigured at run time, enabling improved and tailored performance, flexibility, and acceleration. FPGAs excel at executing large-scale search optimization, acceleration, and signal processing tasks while consuming low power and minimizing latency. Major public cloud providers, such as Amazon, Huawei, Microsoft, and Alibaba, have already begun integrating FPGA-based cloud acceleration services into their offerings. Although FPGAs in cloud applications facilitate customized hardware acceleration, they also introduce new security challenges that demand attention. Granting cloud users the capability to reconfigure hardware designs after deployment may create potential vulnerabilities for malicious users, thereby jeopardizing entire cloud platforms. In particular, multi-tenant FPGA services, in which a single FPGA is shared spatially among multiple users, are highly vulnerable to such attacks. This paper examines the security concerns associated with multi-tenant cloud FPGAs, provides a comprehensive overview of the related security, privacy, and trust issues, and discusses forthcoming challenges in this evolving field of study.
  5. Openness and intelligence are two enabling features to be introduced in next-generation wireless networks, for example Beyond 5G and 6G, to support service heterogeneity, open hardware, optimal resource utilization, and on-demand service deployment. The open radio access network (O-RAN) is a promising RAN architecture for achieving both openness and intelligence through virtualized network elements and well-defined interfaces. While deploying artificial intelligence (AI) models is becoming easier in O-RAN, one significant challenge that has long been neglected is comprehensively testing their performance in realistic environments. This article presents a general automated, distributed, and AI-enabled testing framework for testing AI models deployed in O-RAN in terms of their decision-making performance, vulnerability, and security. The framework adopts a master-actor architecture to manage a number of end devices for distributed testing. More importantly, it leverages AI to automatically and intelligently explore the decision space of the AI models in O-RAN. Both software simulation testing and software-defined radio hardware testing are supported, enabling rapid proof-of-concept work and experimental research on wireless research platforms.
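    A minimal sketch of a master-actor testing loop in this spirit is given below: a master dispatches test scenarios to worker actors, which evaluate a deployed decision model and report a performance metric so the master can focus subsequent tests on weak regions. All class and function names, and the toy model, are hypothetical and are not the framework's actual API.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
import random

@dataclass
class Scenario:
    scenario_id: int
    traffic_load: float   # e.g., offered load the RAN model must schedule for

def deployed_model_decision(scenario):
    """Placeholder for the AI model under test (e.g., an O-RAN resource scheduler)."""
    return min(1.0, scenario.traffic_load * (0.8 + 0.4 * random.random()))

def actor_run(scenario):
    """One actor evaluates the model's decision quality for a single scenario."""
    decision = deployed_model_decision(scenario)
    reward = 1.0 - abs(decision - scenario.traffic_load)   # toy performance metric
    return scenario.scenario_id, reward

def master(num_scenarios=16, num_actors=4):
    """Master schedules scenarios across actors and collects their reports."""
    scenarios = [Scenario(i, random.uniform(0.2, 1.0)) for i in range(num_scenarios)]
    with ThreadPoolExecutor(max_workers=num_actors) as pool:
        results = list(pool.map(actor_run, scenarios))
    # The master could now steer the next batch toward low-reward (vulnerable) regions.
    worst = sorted(results, key=lambda r: r[1])[:3]
    print("weakest scenarios:", worst)

master()
```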