Title: Semantic Adversarial Deep Learning
Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms, especially deep neural networks, are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, health care, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary can cause serious consequences. However, existing approaches to generating adversarial examples and devising robust ML algorithms mostly ignore the semantics and context of the overall system containing the ML component. For example, in an autonomous vehicle using deep learning for perception, not every adversarial example for the neural network might lead to a harmful consequence. Moreover, one may want to prioritize the search for adversarial examples towards those that significantly modify the desired semantics of the overall system. Along the same lines, existing algorithms for constructing robust ML algorithms ignore the specification of the overall system. In this paper, we argue that the semantics and specification of the overall system have a crucial role to play in this line of research. We present preliminary research results that support this claim.
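The distinction the abstract draws can be illustrated with a toy sketch (the linear "classifier", the controller, and all thresholds below are hypothetical illustrations, not the paper's models): a random perturbation is kept only if it changes the downstream controller's action, not merely the classifier's label.

```python
import random

def classify(x):
    # Toy "perception model": label 1 = "obstacle ahead", 0 = "clear".
    return 1 if sum(x) > 1.0 else 0

def controller(label, speed):
    # Toy controller: brake only when an obstacle is reported at non-trivial speed.
    return "brake" if (label == 1 and speed > 0.2) else "cruise"

def semantic_adversarial_search(x, speed, trials=200, eps=0.3, seed=0):
    """Keep only perturbations that flip the classifier AND change the action."""
    rng = random.Random(seed)
    base_label = classify(x)
    base_action = controller(base_label, speed)
    found = []
    for _ in range(trials):
        delta = [rng.uniform(-eps, eps) for _ in x]
        x_adv = [xi + di for xi, di in zip(x, delta)]
        adv_label = classify(x_adv)
        if adv_label != base_label:                      # misclassification...
            if controller(adv_label, speed) != base_action:  # ...that matters
                found.append(x_adv)
    return found
```

At low speed the controller cruises regardless of the label, so label flips are filtered out as semantically harmless; at higher speed the same flips change "brake" to "cruise" and are kept.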
Award ID(s):
1804648 1836978
PAR ID:
10111273
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Computer Aided Verification
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Adaptive bitrate (ABR) algorithms aim to make optimal bitrate decisions in dynamically changing network conditions to ensure a high quality of experience (QoE) for users during video streaming. However, most existing ABRs share the limitations of predefined rules and incorrect assumptions about streaming parameters. They also fall short of considering the perceived quality in their QoE model, target higher bitrates regardless, and ignore the corresponding energy consumption. Together, these limitations cause additional energy consumption and become a burden, especially for mobile device users. This paper proposes GreenABR, a new deep reinforcement learning-based ABR scheme that optimizes energy consumption during video streaming without sacrificing user QoE. GreenABR employs a standard perceived-quality metric, VMAF, and real power measurements collected through a streaming application. GreenABR's deep reinforcement learning model makes no assumptions about the streaming environment and learns how to adapt to dynamically changing conditions in a wide range of real network scenarios. GreenABR outperforms existing state-of-the-art ABR algorithms by saving up to 57% in streaming energy consumption and 60% in data consumption while achieving up to 22% more perceptual QoE, owing to up to 84% less rebuffering time and near-zero capacity violations.
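An energy-aware reward of the kind such an agent could optimize can be sketched as follows (the linear form and all weights are illustrative assumptions, not GreenABR's actual reward function):

```python
def abr_reward(vmaf, prev_vmaf, rebuffer_s, energy_j,
               w_quality=1.0, w_smooth=0.5, w_rebuf=4.0, w_energy=0.01):
    """Per-chunk reward balancing perceived quality against stalls and energy."""
    quality = w_quality * vmaf                        # perceived quality (VMAF, 0-100)
    smoothness = -w_smooth * abs(vmaf - prev_vmaf)    # penalize abrupt quality switches
    rebuffering = -w_rebuf * rebuffer_s               # stalls hurt QoE the most
    energy = -w_energy * energy_j                     # measured device power draw
    return quality + smoothness + rebuffering + energy
```

With such a reward, two bitrate choices yielding the same VMAF are ranked by their energy cost, which is how an agent can save energy without sacrificing perceived quality.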
  2. Machine Learning (ML) algorithms are increasingly used in our daily lives, yet often exhibit discrimination against protected groups. In this talk, I discuss the growing concern of bias in ML and overview existing approaches to addressing fairness issues. Then, I present three novel approaches developed by my research group. The first leverages generative AI to eliminate biases in training datasets, the second tackles non-convex problems that arise in fair learning, and the third introduces a matrix decomposition-based post-processing approach to identify and eliminate unfair model components.
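One of the simplest fairness criteria such work measures is demographic parity; a minimal sketch (the two-group setup and function name are illustrative, not from the talk):

```python
def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between groups A and B.

    preds:  0/1 model predictions, one per example
    groups: "A" or "B" group membership, one per example
    """
    rate = {}
    for g in ("A", "B"):
        picks = [p for p, gi in zip(preds, groups) if gi == g]
        rate[g] = sum(picks) / len(picks)
    return abs(rate["A"] - rate["B"])
```

A gap of 0 means both groups receive positive predictions at the same rate; post-processing approaches of the kind the talk describes aim to drive this gap down without destroying accuracy.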
  3. Existing adversarial algorithms for Deep Reinforcement Learning (DRL) have largely focused on identifying an optimal time to attack a DRL agent. However, little work has explored injecting efficient adversarial perturbations into DRL environments. We propose a suite of novel DRL adversarial attacks, called ACADIA, representing AttaCks Against Deep reInforcement leArning. ACADIA provides a set of efficient and robust perturbation-based adversarial attacks that disturb a DRL agent's decision-making, based on novel combinations of techniques utilizing momentum, the ADAM optimizer (including Root Mean Square Propagation, or RMSProp), and initial randomization. DRL attacks with this integration of techniques have not been studied in the existing Deep Neural Network (DNN) and DRL research. We consider two well-known DRL algorithms, Deep Q-Network (DQN) and Proximal Policy Optimization (PPO), on Atari games and MuJoCo environments, where both targeted and non-targeted attacks are evaluated with and without state-of-the-art DRL defenses (i.e., RADIAL and ATLA). Our results demonstrate that ACADIA outperforms existing gradient-based counterparts under a wide range of experimental settings and is nine times faster than the state-of-the-art Carlini & Wagner (CW) method, with better performance under DRL defenses.
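The momentum ingredient mentioned above can be sketched as a momentum-accumulating iterative perturbation (in the spirit of MI-FGSM, one of the techniques such attacks combine; the toy analytic loss and all step sizes below are assumptions for illustration, not ACADIA itself):

```python
def sign(v):
    # Elementwise sign of a vector, as plain ints (-1, 0, 1).
    return [(x > 0) - (x < 0) for x in v]

def momentum_attack(x, grad_fn, steps=10, alpha=0.05, mu=0.9):
    """Iteratively perturb x along the sign of a momentum-accumulated gradient."""
    g = [0.0] * len(x)
    x_adv = list(x)
    for _ in range(steps):
        grad = grad_fn(x_adv)
        norm = sum(abs(gi) for gi in grad) or 1.0        # L1-normalize the gradient
        g = [mu * gi + gr / norm for gi, gr in zip(g, grad)]   # accumulate momentum
        x_adv = [xi + alpha * s for xi, s in zip(x_adv, sign(g))]  # ascend the loss
    return x_adv

# Toy loss L(x) = sum(x_i^2) with analytic gradient 2x: each step pushes every
# coordinate away from 0, so the loss of the perturbed input strictly grows.
```

The momentum term stabilizes the update direction across iterations, which is what makes such attacks transfer better than plain one-step gradient-sign perturbations.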
  4. Thanks to the extensive machine-learning-based malware detection (MLMD) research of recent years and readily available online malware scanning systems (e.g., VirusTotal), it has become relatively easy to build a seemingly successful MLMD system using the following standard procedure: first prepare a set of ground-truth data by checking with VirusTotal, then extract features from a training dataset and build a machine learning detection model, and finally evaluate the model with a disjoint testing dataset. We argue that such evaluation methods do not expose the real utility of ML-based malware detection in practice, since the ML model is both built and tested on malware that is known at the time of training. A user could simply run those samples through VirusTotal, just as the researchers obtained the ground truth, instead of using the more sophisticated ML approach. However, ML-based malware detection has the potential to identify malware that was not known at the time of training, which is the real value ML brings to this problem. We present an experimental study of how well a machine learning based malware detection system can achieve this. Our experiments show that MLMD can consistently generate previously unknown malware knowledge, e.g., malware that is not detectable by existing malware detection systems at MLMD's training time. Our research illustrates an ideal usage scenario for MLMD systems and demonstrates that such systems can benefit malware detection in practice. For example, by utilizing the new signals provided by the MLMD system together with the detection capability of existing malware detection systems, we can more quickly uncover new malware variants or families.
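The evaluation idea above, splitting by time rather than randomly, so the test set mimics "unknown at training time" malware, can be sketched as follows (the tuple layout and integer timestamps are illustrative assumptions, not the paper's exact protocol):

```python
def temporal_split(samples, cutoff):
    """Train only on samples first seen before the cutoff; test on the rest.

    samples: list of (sample_id, label, first_seen) tuples, where first_seen
             is any sortable timestamp (plain integers stand in for dates here).
    """
    train = [s for s in samples if s[2] < cutoff]
    test = [s for s in samples if s[2] >= cutoff]
    return train, test

def previously_unknown(test, known_ids):
    # Test-time malware (label 1) absent from the scanners' databases at
    # training time -- the "new knowledge" the abstract argues ML can provide.
    return [s for s in test if s[1] == 1 and s[0] not in known_ids]
```

Under a random split, both sets draw from the same known population; the temporal split is what separates "detecting the known" from the harder, more valuable task.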
  5. Artificial Intelligence (AI) bots receive much attention and see wide use, from industrial manufacturing to store cashier applications. Our research trains AI bots to be software engineering assistants, specifically to detect biases and errors inside AI software applications. An example application is an AI machine learning system that sorts and classifies people according to various attributes, such as the algorithms involved in criminal sentencing, hiring, and admission practices. Biases, unfair decisions, and flaws in equity, diversity, and justice in such systems could have severe consequences. As a Hispanic-Serving Institution, we are concerned about underrepresented groups and devoted an extended amount of our time to implementing "An Assure AI" (AAAI) Bot to detect biases and errors in AI applications. Our state-of-the-art AI Bot was developed based on our previously accumulated research in AI and Deep Learning (DL). The key differentiator is our unique approach: instead of cleaning the input data, filtering it out, and minimizing its biases, we trained our deep Neural Networks (NN) to detect and mitigate biases of existing AI models. The backend of our bot uses the Detection Transformer (DETR) framework, developed by Facebook,