Title: AI for Security and Security for AI
Award ID(s):
1917117 2041770 2038483
PAR ID:
10252161
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Conference on Data and Application Security and Privacy (CODASPY) 2021
Page Range / eLocation ID:
333 to 334
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In recent years, Artificial Intelligence (AI) systems have achieved revolutionary capabilities, providing intelligent solutions that surpass human skills in many cases. However, such capabilities come with power-hungry computation workloads. Hardware acceleration therefore becomes as fundamental as the software design for improving the energy efficiency and reducing the silicon area and latency of AI systems, and innovative hardware platforms, architectures, and compiler-level approaches have been developed to accelerate AI workloads. Crucially, these AI acceleration platforms are being adopted in application domains where dependability is paramount, such as autonomous driving, healthcare, banking, space exploration, and Industry 4.0. Unfortunately, the complexity of both AI software and hardware makes dependability evaluation and improvement extremely challenging. Studies have been conducted on both the security and reliability of AI systems, including vulnerability assessments of and countermeasures against random hardware faults, as well as analyses of side-channel attacks (see the fault-injection sketch after this list). This paper describes and discusses various reliability and security threats in AI systems and presents representative case studies along with corresponding efficient countermeasures.
  2. Several recent research efforts have proposed Machine Learning (ML)-based solutions that can detect complex patterns in network traffic for a wide range of network security problems. However, without understanding how these black-box models make their decisions, network operators are reluctant to trust them and deploy them in their production settings. One key reason for this reluctance is that these models are prone to the problem of underspecification, defined here as the failure to specify a model in adequate detail. Not unique to the network security domain, this problem manifests itself in ML models that exhibit unexpectedly poor behavior when deployed in real-world settings and has prompted growing interest in developing interpretable ML solutions (e.g., decision trees) for “explaining” to humans how a given black-box model makes its decisions. However, synthesizing such explainable models that capture a given black-box model’s decisions with high fidelity while also being practical (i.e., small enough in size for humans to comprehend) is challenging. In this paper, we focus on synthesizing high-fidelity, low-complexity decision trees to help network operators determine whether their ML models suffer from underspecification. To this end, we present TRUSTEE, a framework that takes an existing ML model and its training dataset and generates a high-fidelity, easy-to-interpret decision tree together with an associated trust report (see the distillation sketch after this list). Using published ML models that are fully reproducible, we show how practitioners can use TRUSTEE to identify three common instances of model underspecification: evidence of shortcut learning, spurious correlations, and vulnerability to out-of-distribution samples.
  3. This survey paper provides an overview of the current state of Artificial Intelligence (AI) attacks and of the risks they pose to AI security and privacy as AI becomes more prevalent in various applications and services. The risks associated with AI attacks and security breaches are becoming increasingly apparent and are already causing significant financial and social losses. The paper categorizes the different types of attacks on AI models, including adversarial attacks, model inversion attacks, poisoning attacks, data poisoning attacks, data extraction attacks, and membership inference attacks (see the adversarial-example sketch after this list). It also emphasizes the importance of developing secure and robust AI models to ensure the privacy and security of sensitive data. Through a systematic literature review, the survey comprehensively analyzes the current state of AI attacks, the risks they pose to AI security and privacy, and the corresponding detection techniques.
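To make the fault-related threats in item 1 concrete, here is a minimal sketch of a random-fault vulnerability assessment: flip a single bit in a float32 weight tensor of a toy one-layer model and measure how far the output drifts. The model shape, the fault model (one bit flip per trial), and the trial count are illustrative assumptions, not the case studies from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_random_bit(weights: np.ndarray) -> np.ndarray:
    """Return a copy of `weights` with one random bit flipped in one element."""
    faulty = np.ascontiguousarray(weights, dtype=np.float32).copy()
    bits = faulty.reshape(-1).view(np.uint32)   # reinterpret the raw float bits
    idx = rng.integers(bits.size)               # which weight to corrupt
    bit = rng.integers(32)                      # which of its 32 bits to flip
    bits[idx] ^= np.uint32(1) << np.uint32(bit)
    return faulty

# Toy one-layer "model": y = ReLU(W x).
W = rng.standard_normal((16, 8)).astype(np.float32)
x = rng.standard_normal(8).astype(np.float32)
clean = np.maximum(W @ x, 0.0)

# Inject one fault per trial and record how far the output drifts.
deviations = []
for _ in range(1000):
    faulty = np.maximum(flip_random_bit(W) @ x, 0.0)
    dev = float(np.max(np.abs(faulty - clean)))
    deviations.append(dev if np.isfinite(dev) else np.inf)  # exponent/NaN flips can explode

print(f"median deviation: {np.median(deviations):.3g}")
print(f"worst-case deviation: {np.max(deviations):.3g}")
```

The gap between the median and worst-case deviation is the point of such an assessment: most bit flips are benign, but a flip in a high exponent bit can corrupt the output arbitrarily, which is what hardening countermeasures target.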
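For item 2, here is a minimal sketch of the distillation idea behind TRUSTEE: fit an interpretable decision tree to a black-box model's predictions and report its fidelity (agreement with the black box) and size. The dataset, the random-forest black box, and the depth limit are assumptions; the actual TRUSTEE framework goes beyond this basic distillation step and also produces the trust report described in the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. The "black box" an operator might already have in production.
black_box = RandomForestClassifier(n_estimators=100, random_state=0)
black_box.fit(X_train, y_train)

# 2. Distill: train a small tree to mimic the black box's decisions,
#    using the black box's outputs (not the ground truth) as labels.
surrogate = DecisionTreeClassifier(max_depth=5, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Fidelity: how often the tree agrees with the black box on held-out data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"surrogate fidelity to black box: {fidelity:.3f}")
print(f"surrogate size: {surrogate.tree_.node_count} nodes")
```

The depth limit is the practicality knob from the abstract: a deeper tree raises fidelity but quickly stops being something a human operator can audit for shortcut learning or spurious correlations.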
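And for the attack taxonomy in item 3, here is a minimal sketch of one category, an evasion attack via adversarial examples: the fast gradient sign method (FGSM) applied to a hand-rolled logistic-regression victim. The victim weights, the clean input, and epsilon are illustrative assumptions; the survey covers this and the other attack classes in far more depth.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained victim: logistic regression p(y=1|x) = sigmoid(w.x + b).
w = rng.standard_normal(10)
b = 0.1
x = 0.3 * w        # an input aligned with w, so the victim assigns class 1 high probability
y = 1.0            # its true label

# For cross-entropy loss, the gradient w.r.t. the *input* is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: one epsilon-sized step in the sign of the input gradient,
# i.e., the direction that most increases the loss under an L-inf budget.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean       p(y=1 | x)     = {p:.3f}")
print(f"adversarial p(y=1 | x_adv) = {sigmoid(w @ x_adv + b):.3f}")
```

Even this toy shows why the abstract stresses robustness: a small, bounded perturbation of the input, crafted from the model's own gradient, is enough to push a confident prediction toward the wrong class.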