In industrial applications, Machine Learning (ML) services are often deployed on cloud infrastructure and require a transfer of the input data over a network, which is susceptible to Quality of Service (QoS) degradation. In this paper, we investigate the robustness of industrial ML classifiers to varying Data Quality (DQ) caused by degradation in network QoS. We define the robustness of an ML model as its ability to maintain a certain level of performance under variable levels of DQ at its input. We employ classification accuracy as the performance metric for the ML classifiers studied. The POWDER testbed is utilized to create an experimental setup consisting of a real-world wireless network connecting two nodes. We transfer multiple video and image files between the two nodes under varying degrees of packet loss and varying buffer sizes to create degraded data. We then evaluate the performance of AWS Rekognition, a commercial ML tool for on-demand object detection, on the corrupted video and image data. We also evaluate the performance of YOLOv7 to compare a commercial and an open-source model. As a result, we demonstrate that even a slight degree of packet loss, 1% for images and 0.2% for videos, can have a drastic impact on the classification performance of the system. We discuss possible ways to make industrial ML systems more robust to network QoS degradation.
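A minimal sketch of the kind of on-demand Rekognition query used in such an evaluation, assuming configured AWS credentials; the file name, region, and confidence threshold are illustrative, not the authors' exact pipeline:

```python
# Minimal sketch (not the authors' exact setup): query AWS Rekognition on an
# image that was received over a lossy network link and inspect label confidences.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("received_frame.jpg", "rb") as f:   # image after lossy transfer (assumed path)
    image_bytes = f.read()

response = rekognition.detect_labels(
    Image={"Bytes": image_bytes},   # inline bytes; an S3Object reference is an alternative
    MaxLabels=10,
    MinConfidence=50.0,             # illustrative threshold
)

# Each label carries a confidence score that can be compared against the labels
# returned for the clean (undegraded) copy of the same image.
for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```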
Are Industrial ML Image Classifiers Robust to Withstand Adversarial Attacks on Videos?
We investigate the impact of adversarial attacks on videos on the object detection and classification performance of an industrial Machine Learning (ML) application. Specifically, we design a use case around an Intelligent Transportation System that processes real videos recorded by vehicles' dash cams and detects traffic lights and road signs in these videos. As the ML system, we employ Amazon's Rekognition cloud service, a commercial tool for on-demand object detection in data of various modalities. To study the robustness of Rekognition to adversarial attacks, we manipulate the videos by adding noise to them. We vary the intensity of the added noise by setting the ratio of randomly selected pixels affected by it. We then process the videos affected by noise of varying intensity and evaluate the performance of Rekognition. As evaluation metrics, we employ the confidence scores provided by Rekognition and the ratio of correct decisions, which shows how successful Rekognition is in recognizing the patterns of interest in a frame. According to our results, even simple adversarial attacks of low intensity (up to 2% of the pixels affected in a single frame) cause a significant decrease in Rekognition's performance and call for additional measures to improve robustness and satisfy the demands of industrial ML applications.
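The pixel-level perturbation described above can be sketched as follows; the helper name and the choice of uniform random replacement values are assumptions, not the paper's exact attack code:

```python
# Sketch only: set a chosen fraction of randomly selected pixels to random values,
# mimicking the low-intensity noise described above (e.g. up to 2% of pixels).
import numpy as np

def perturb_frame(frame: np.ndarray, pixel_ratio: float, rng=None) -> np.ndarray:
    """Return a copy of an 8-bit frame with `pixel_ratio` of its pixels randomized."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = frame.copy()
    h, w = frame.shape[:2]
    n_pixels = int(pixel_ratio * h * w)
    # Duplicate indices may slightly lower the effective ratio; fine for a sketch.
    rows = rng.integers(0, h, size=n_pixels)
    cols = rng.integers(0, w, size=n_pixels)
    noisy[rows, cols] = rng.integers(0, 256, size=(n_pixels, frame.shape[2]),
                                     dtype=frame.dtype)
    return noisy

# Example: corrupt 2% of the pixels in an 8-bit RGB frame.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
noisy_frame = perturb_frame(frame, pixel_ratio=0.02)
```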
- Award ID(s): 2321652
- PAR ID: 10532535
- Publisher / Repository: IEEE
- Date Published:
- ISBN: 979-8-3503-2969-8
- Page Range / eLocation ID: 1 to 4
- Format(s): Medium: X
- Location: Rochester, NY, USA
- Sponsoring Org: National Science Foundation
More Like this
VehiGAN: Generative Adversarial Networks for Adversarially Robust V2X Misbehavior Detection Systems

Vehicle-to-Everything (V2X) communication enables vehicles to communicate with other vehicles and roadside infrastructure, enhancing traffic management and improving road safety. However, the open and decentralized nature of V2X networks exposes them to various security threats, especially misbehaviors, necessitating a robust Misbehavior Detection System (MBDS). While Machine Learning (ML) has proved effective in different anomaly detection applications, the existing ML-based MBDSs have shown limitations in generalizing due to the dynamic nature of V2X and insufficient and imbalanced training data. Moreover, they are known to be vulnerable to adversarial ML attacks. On the other hand, Generative Adversarial Networks (GAN) possess the potential to mitigate the aforementioned issues and improve detection performance by synthesizing unseen samples of minority classes and utilizing them during their model training. Therefore, we propose the first application of GAN to design an MBDS that detects any misbehavior and ensures robustness against adversarial perturbation. In this article, we present several key contributions. First, we propose an advanced threat model for stealthy V2X misbehavior where the attacker can transmit malicious data and mask it using adversarial attacks to avoid detection by ML-based MBDS. We formulate two categories of adversarial attacks against the anomaly-based MBDS. Later, in the pursuit of a generalized and robust GAN-based MBDS, we train and evaluate a diverse set of Wasserstein GAN (WGAN) models and present VehicularGAN (VehiGAN), an ensemble of multiple top-performing WGANs, which transcends the limitations of individual models and improves detection performance. We present a physics-guided data preprocessing technique that generates effective features for ML-based MBDS. In the evaluation, we leverage the state-of-the-art V2X attack simulation tool VASP to create a comprehensive dataset of V2X messages with diverse misbehaviors. Evaluation results show that in 20 out of 35 misbehaviors, VehiGAN outperforms the baseline and exhibits comparable detection performance in other scenarios. Particularly, VehiGAN excels in detecting advanced misbehaviors that manipulate multiple fields in V2X messages simultaneously, replicating unique maneuvers. Moreover, VehiGAN provides approximately 92% improvement in false positive rate under powerful adaptive adversarial attacks, and possesses intrinsic robustness against other adversarial attacks that target the false negative rate. Finally, we make the data and code available for reproducibility and future benchmarking, available at https://github.com/shahriar0651/VehiGAN.
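As a rough illustration of the ensemble idea only (not the VehiGAN implementation; the critic architecture, feature dimensionality, and averaging rule are assumptions):

```python
# Sketch: score a preprocessed V2X feature vector with an ensemble of WGAN
# critics and average the scores, so no single critic's weakness dominates.
import torch
import torch.nn as nn

class Critic(nn.Module):
    """A small WGAN critic mapping a V2X feature vector to a realness score."""
    def __init__(self, n_features: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 32), nn.LeakyReLU(0.2),
            nn.Linear(32, 1),   # unbounded Wasserstein score, no sigmoid
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# An ensemble of independently trained critics; here they are untrained stand-ins.
ensemble = [Critic() for _ in range(5)]

def anomaly_score(message_features: torch.Tensor) -> torch.Tensor:
    # Lower critic output = less "real" = more likely misbehavior.
    with torch.no_grad():
        scores = torch.stack([critic(message_features) for critic in ensemble])
    return scores.mean(dim=0)

x = torch.randn(1, 16)   # one preprocessed V2X message (assumed 16 features)
print(anomaly_score(x))
```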
The burgeoning fields of machine learning (ML) and quantum machine learning (QML) have shown remarkable potential in tackling complex problems across various domains. However, their susceptibility to adversarial attacks raises concerns when deploying these systems in security-sensitive applications. In this study, we present a comparative analysis of the vulnerability of ML and QML models, specifically conventional neural networks (NN) and quantum neural networks (QNN), to adversarial attacks using a malware dataset. We utilize a software supply chain attack dataset known as ClaMP and develop two distinct models for QNN and NN, employing Pennylane for quantum implementations and TensorFlow and Keras for traditional implementations. Our methodology involves crafting adversarial samples by introducing random noise to a small portion of the dataset and evaluating the impact on the models' performance using accuracy, precision, recall, and F1 score metrics. Based on our observations, both ML and QML models exhibit vulnerability to adversarial attacks. While the QNN's accuracy decreases more significantly compared to the NN after the attack, it demonstrates better performance in terms of precision and recall, indicating higher resilience in detecting true positives under adversarial conditions. We also find that adversarial samples crafted for one model type can impair the performance of the other, highlighting the need for robust defense mechanisms. Our study serves as a foundation for future research focused on enhancing the security and resilience of ML and QML models, particularly QNN, given its recent advancements. A more extensive range of experiments will be conducted to better understand the performance and robustness of both models in the face of adversarial attacks.
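The perturb-and-evaluate loop described in that abstract might be sketched as follows, with a generic scikit-learn classifier standing in for the NN/QNN models and assumed values for the noise scale and the fraction of perturbed samples:

```python
# Sketch: add random noise to a small portion of the data and compare metrics
# on clean vs. perturbed inputs. Data, noise scale, and classifier are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                    # placeholder feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # synthetic binary labels

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Craft "adversarial" samples by adding random noise to 10% of the set (assumed).
X_adv = X.copy()
idx = rng.choice(len(X), size=int(0.1 * len(X)), replace=False)
X_adv[idx] += rng.normal(scale=0.5, size=X_adv[idx].shape)

for name, data in [("clean", X), ("perturbed", X_adv)]:
    pred = clf.predict(data)
    print(name,
          accuracy_score(y, pred),
          precision_score(y, pred),
          recall_score(y, pred),
          f1_score(y, pred))
```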
Several attacks have been proposed against autonomous vehicles and their subsystems that are powered by machine learning (ML). Road sign recognition models are especially heavily tested under various adversarial ML attack settings, and they have proven to be vulnerable. Despite the increasing research on adversarial ML attacks against road sign recognition models, there is little to no focus on defending against these attacks. In this paper, we propose the first defense method specifically designed for autonomous vehicles to detect adversarial ML attacks targeting road sign recognition models, which is called ViLAS (Vision-Language Model for Adversarial Traffic Sign Detection). The proposed defense method is based on a custom, fast, lightweight, and scalable vision-language model (VLM) and is compatible with any existing traffic sign recognition system. Thanks to the orthogonal information coming from the class label text data through the language model, ViLAS leverages image context in addition to visual data for highly effective attack detection performance. In our extensive experiments, we show that our method consistently detects various attacks against different target models with high true positive rates while satisfying very low false positive rates. When tested against four state-of-the-art attacks targeting four popular action recognition models, our proposed detector achieves an average AUC of 0.94. This result achieves a 25.3% improvement over a state-of-the-art defense method proposed for generic image attack detection, which attains an average AUC of 0.75. We also show that our custom VLM is more suitable for an autonomous vehicle compared to the popular off-the-shelf VLM and CLIP in terms of speed (4.4 vs. 9.3 milliseconds), space complexity (0.36 vs. 1.6 GB), and performance (0.94 vs. 0.43 average AUC).
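The underlying idea of checking image-text agreement can be sketched with an off-the-shelf CLIP model; the paper's ViLAS uses a custom, lighter VLM, so this is only an approximation of the approach, and the frame path and prompt texts are assumptions:

```python
# Sketch: score how well a frame matches the text of each candidate class; weak
# or contradictory image-text agreement can flag a possibly attacked input.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("frame.jpg")                       # illustrative frame path
texts = ["a photo of a stop sign", "a photo of a speed limit sign"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image        # image-text similarity scores
probs = logits.softmax(dim=-1)

# If the traffic sign classifier's predicted label disagrees with the best-matching
# text (or the best match is weak), treat the input as suspicious.
print(probs)
```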
A critical aspect of autonomous vehicles (AVs) is the object detection stage, which is increasingly being performed with sensor fusion models: multimodal 3D object detection models which utilize both 2D RGB image data and 3D data from a LIDAR sensor as inputs. In this work, we perform the first study to analyze the robustness of a high-performance, open source sensor fusion model architecture towards adversarial attacks and challenge the popular belief that the use of additional sensors automatically mitigates the risk of adversarial attacks. We find that despite the use of a LIDAR sensor, the model is vulnerable to our purposefully crafted image-based adversarial attacks including disappearance, universal patch, and spoofing. After identifying the underlying reason, we explore some potential defenses and provide some recommendations for improved sensor fusion models.