Title: Study of AI Object Detection: Patterns on Animals with YOLO and Adversarial Patches
In this paper, we document our findings from previous research and literature related to adversarial examples and object detection. Artificial Intelligence (AI) is an increasingly powerful tool in many fields, particularly image classification and object detection. As AI models become more capable, new methods of deceiving them, such as adversarial patches, have emerged. These subtle modifications to images can cause models to misclassify objects, posing a significant challenge to their reliability. This research builds on our earlier work by investigating how small patches affect object detection in YOLOv8. Last year, we explored patterns within images and their impact on model accuracy. This study extends that work by testing how adversarial patches, particularly those based on animal patterns, affect YOLOv8's ability to detect objects accurately. We also explore how patterns absent from the training data influence the model's performance, aiming to identify weaknesses and improve the robustness of object detection systems.
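As a concrete illustration of the patch-placement step described above, here is a minimal sketch: the image, the striped "animal pattern" patch, and the placement coordinates are all hypothetical, and the commented detection call assumes the `ultralytics` package and its pretrained `yolov8n.pt` weights.

```python
import numpy as np

def apply_patch(image: np.ndarray, patch: np.ndarray, top: int, left: int) -> np.ndarray:
    """Paste a rectangular adversarial patch onto an H x W x 3 image at (top, left)."""
    patched = image.copy()
    ph, pw = patch.shape[:2]
    patched[top:top + ph, left:left + pw] = patch
    return patched

# Toy example: a 64x64 gray "image" and an 8x8 high-contrast patch whose
# stripes loosely mimic animal markings.
image = np.full((64, 64, 3), 128, dtype=np.uint8)
patch = np.zeros((8, 8, 3), dtype=np.uint8)
patch[::2, :] = 255  # alternate rows white -> striped pattern

patched = apply_patch(image, patch, top=10, left=20)

# Detection on the patched image would then use YOLOv8, e.g.:
#   from ultralytics import YOLO
#   results = YOLO("yolov8n.pt")(patched)
```

Comparing the detector's predictions on `image` and `patched` across patch patterns and positions is the basic experimental loop this kind of study runs.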
Award ID(s):
1754054 2131255
PAR ID:
10623598
Author(s) / Creator(s):
; ;
Publisher / Repository:
The 2025 ADMI Symposium.
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The benefits of utilizing spatial context in fast object detection algorithms have been studied extensively. Detectors increase inference speed by performing a single forward pass per image, which means they implicitly use contextual reasoning for their predictions. However, one can show that an adversary can design adversarial patches that do not overlap with any objects of interest in the scene and exploit contextual reasoning to fool standard detectors. In this paper, we examine this problem and design category-specific adversarial patches that make a widely used object detector like YOLO blind to an attacker-chosen object category. We also show that limiting the use of spatial context during object detector training improves robustness to such adversaries. We believe the existence of context-based adversarial attacks is concerning, since an adversarial patch can affect predictions without being in the vicinity of any objects of interest. Defending against such attacks is therefore challenging, and we urge the research community to give attention to this vulnerability.
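The "blindness" effect described above can be quantified as a per-category recall drop between clean and patched scenes. A minimal sketch, with illustrative label lists rather than real detector output:

```python
from collections import Counter

def category_recall(ground_truth, detections):
    """Per-category recall: fraction of ground-truth instances the detector found."""
    gt, det = Counter(ground_truth), Counter(detections)
    return {cat: min(det[cat], n) / n for cat, n in gt.items()}

# Hypothetical labels for one scene, before and after placing a context patch
# that targets the "person" category (values are illustrative only).
truth = ["person", "person", "dog", "car"]
clean = ["person", "person", "dog", "car"]
patched = ["dog", "car"]  # detector has gone "blind" to the attacked category

clean_recall = category_recall(truth, clean)    # all categories at 1.0
attack_recall = category_recall(truth, patched) # "person" drops to 0.0
```

A successful category-specific attack shows up as a recall collapse for the attacked category while the others stay near their clean values.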
  2. With the increased use of machine learning models, there is a need to understand how they can be maliciously targeted. Understanding how these attacks are ‘enacted’ helps in ‘hardening’ models so that it is harder for attackers to evade detection. We want to better understand object detection, the underlying algorithms, and the different perturbation approaches that can be used to fool these models. To this end, we document our findings as a review of existing literature and open-source repositories related to Computer Vision and Object Detection. We also examine how adversarial patches impact object detection algorithms. Our objective was to replicate existing processes in order to reproduce results and further our research on adversarial patches.
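One of the simplest perturbation approaches in this literature is a fast-gradient-sign step, which nudges every input coordinate by a fixed amount in the direction that increases the loss. A toy sketch on a hypothetical linear scorer (not a model from the reviewed work), where the gradient of the score with respect to the input is simply the weight vector:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast-gradient-sign step: move each coordinate of x by eps in the
    sign direction of the supplied gradient."""
    return x + eps * np.sign(grad)

# Toy linear "model": score = w . x; positive score means class 1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])   # clean input; score = 0.2 > 0

# To push the score negative, step against w (the gradient of the score).
x_adv = fgsm_perturb(x, -w, eps=0.2)

clean_score = float(w @ x)      # 0.2
adv_score = float(w @ x_adv)    # -0.5: the decision flips
```

The same sign-step idea, applied iteratively and restricted to a small image region, underlies many adversarial-patch optimization procedures.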
  4.
    Due to the importance of object detection in video analysis and image annotation, it is widely utilized in computer vision tasks such as face recognition, autonomous vehicles, activity recognition, object tracking, and identity verification. Object detection involves not only classifying and identifying objects within images but also localizing and tracing them by drawing bounding boxes around the objects and labelling them with their respective prediction scores. Here, we discuss how connected vision systems can embed cameras, image processing, Edge Artificial Intelligence (AI), and data connectivity capabilities for flood label detection. We adopt the engineering definition of label detection, in which a label is a sequence of discrete, measurable observations obtained using a capturing device such as a web camera or smartphone. We built a Big Data service of around 1000 images (an image annotation service), including image geolocation information, from various flooding events in the Carolinas (USA), with a total of eight different object categories. Our platform has several smart AI tools and task configurations that can detect objects' edges or contours, which can be manually adjusted with a threshold setting so as to best segment the image. The tool can train on the dataset and predict labels for large-scale datasets, which can be used as an object detector to drastically reduce the time spent per object, particularly for real-time image-based flood forecasting. This research is funded by the US National Science Foundation (NSF).
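Bounding-box predictions like those described above are typically compared to annotations with intersection-over-union (IoU). A minimal sketch with illustrative coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted flood-label box vs. an annotated one (coordinates illustrative):
# the boxes overlap in a 20x20 region.
pred = (10, 10, 50, 50)
annot = (30, 30, 70, 70)
score = iou(pred, annot)  # 400 / (1600 + 1600 - 400) = 1/7
```

A detection is usually counted as correct when its IoU with an annotation exceeds a fixed threshold such as 0.5.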
  5. Deep neural networks (DNNs) and generative AI (GenAI) are increasingly vulnerable to backdoor attacks, where adversaries embed triggers into inputs to cause models to misclassify or misinterpret target labels. Beyond traditional single-trigger scenarios, attackers may inject multiple triggers across various object classes, forming unseen backdoor-object configurations that evade standard detection pipelines. In this paper, we introduce DBOM (Disentangled Backdoor-Object Modeling), a proactive framework that leverages structured disentanglement to identify and neutralize both seen and unseen backdoor threats at the dataset level. Specifically, DBOM factorizes input image representations by modeling triggers and objects as independent primitives in the embedding space through the use of Vision-Language Models (VLMs). By leveraging the frozen, pre-trained encoders of VLMs, our approach decomposes the latent representations into distinct components through a learnable visual prompt repository and prompt prefix tuning, ensuring that the relationships between triggers and objects are explicitly captured. To separate trigger and object representations in the visual prompt repository, we introduce the trigger–object separation and diversity losses, which aid in disentangling trigger and object visual features. Next, by aligning image features with feature decomposition and fusion, as well as learned contextual prompt tokens, in a shared multimodal space, DBOM enables zero-shot generalization to novel trigger-object pairings that were unseen during training, thereby offering deeper insights into adversarial attack patterns. Experimental results on CIFAR-10 and GTSRB demonstrate that DBOM robustly detects poisoned images prior to downstream training, significantly enhancing the security of DNN training pipelines.
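The zero-shot scoring of unseen trigger–object pairings can be sketched with plain cosine similarity over factorized embeddings. Here the embeddings are random stand-ins, not features from a real VLM encoder as DBOM uses, and the trigger/object names are hypothetical:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def score_pairs(image_emb, trigger_embs, object_embs):
    """Score every (trigger, object) pairing against an image embedding by
    summing the image's similarity to each primitive's embedding."""
    return {
        (t, o): cosine(image_emb, te) + cosine(image_emb, oe)
        for t, te in trigger_embs.items()
        for o, oe in object_embs.items()
    }

rng = np.random.default_rng(0)
# Stand-in primitive embeddings; DBOM would obtain these from frozen VLM
# encoders plus its learned visual prompt repository.
trigger_embs = {"sticker": rng.normal(size=64), "none": rng.normal(size=64)}
object_embs = {"cat": rng.normal(size=64), "dog": rng.normal(size=64)}

# An image whose embedding combines the "sticker" trigger and the "cat"
# object scores that pairing highest, even if that exact combination was
# never seen during training.
image_emb = trigger_embs["sticker"] + object_embs["cat"]
scores = score_pairs(image_emb, trigger_embs, object_embs)
best = max(scores, key=scores.get)
```

Because triggers and objects are scored as independent primitives, the same lookup covers pairings absent from the training set, which is the core of the zero-shot generalization claim.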