Deep Neural Networks (DNNs) have been widely applied in autonomous systems such as self-driving vehicles. Recently, DNN testing has been intensively studied to automatically generate adversarial examples, which inject small-magnitude perturbations into inputs to test DNNs under extreme situations. While existing testing techniques prove to be effective, particularly for autonomous driving, they mostly focus on generating digital adversarial perturbations, e.g., changing image pixels, which may never happen in the physical world. Thus, there is a critical missing piece in the literature on autonomous driving testing: understanding and exploiting both digital and physical adversarial perturbation generation for impacting steering decisions. In this paper, we propose a systematic physical-world testing approach, namely DeepBillboard, targeting a common and practical driving scenario: drive-by billboards. DeepBillboard is capable of generating a robust and resilient printable adversarial billboard test, which works under dynamically changing driving conditions including viewing angle, distance, and lighting. The objective is to maximize the probability, degree, and duration of the steering-angle errors of an autonomous vehicle driving by our generated adversarial billboard. We have extensively evaluated the efficacy and robustness of DeepBillboard by conducting both experiments with digital perturbations and physical-world case studies. The digital experimental results show that DeepBillboard is effective for various steering models and scenes. Furthermore, the physical case studies demonstrate that DeepBillboard is sufficiently robust and resilient for generating physical-world adversarial billboard tests for real-world driving under various weather conditions, inducing an average steering-angle error of up to 26.44 degrees.
To the best of our knowledge, this is the first study demonstrating the possibility of generating realistic and continuous physical-world tests for practical autonomous driving systems; moreover, DeepBillboard can be directly generalized to a variety of other physical entities/surfaces along the curbside, e.g., graffiti painted on a wall.
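The core idea of the abstract — iteratively perturbing billboard pixels to maximize the steering-angle deviation — can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not DeepBillboard's implementation: `steering_model` is a toy linear stand-in for a real DNN steering model, and a finite-difference greedy search replaces gradient-based optimization.

```python
import random

def steering_model(pixels):
    # Toy stand-in for a DNN steering model: a fixed linear map
    # from billboard pixel intensities to a steering angle.
    random.seed(0)
    weights = [random.uniform(-1, 1) for _ in pixels]
    return sum(w * p for w, p in zip(weights, pixels))

def perturb_billboard(pixels, steps=50, eps=0.01):
    """Greedily nudge billboard pixels (clipped to [0, 1]) in whatever
    direction increases the absolute steering-angle deviation from the
    clean prediction, using finite differences instead of backprop."""
    base = steering_model(pixels)
    adv = list(pixels)
    for _ in range(steps):
        for i in range(len(adv)):
            bumped = list(adv)
            bumped[i] = min(1.0, max(0.0, bumped[i] + eps))
            # Keep the bump only if it moves the steering angle
            # further away from the clean prediction.
            if abs(steering_model(bumped) - base) > abs(steering_model(adv) - base):
                adv = bumped
    return adv, abs(steering_model(adv) - base)

clean = [0.5] * 16          # a flat gray "billboard"
adv, error = perturb_billboard(clean)
```

The real attack additionally optimizes over many dash-cam frames at once (different viewing angles, distances, lighting) so that a single printed billboard stays adversarial along the whole drive-by.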
This content will become publicly available on March 24, 2026
Real-time Adversarial Image Perturbations for Autonomous Vehicles using Reinforcement Learning
The deep neural network (DNN) model for computer vision tasks (object detection and classification) is widely used in autonomous vehicles, such as driverless cars and unmanned aerial vehicles. However, DNN models are shown to be vulnerable to adversarial image perturbations. The generation of adversarial examples against inferences of DNNs has been actively studied recently. The generation typically relies on optimizations taking an entire image frame as the decision variable. Hence, given a new image, the computationally expensive optimization needs to start over, as there is no learning between the independent optimizations. Very few approaches have been developed for attacking online image streams while taking into account the underlying physical dynamics of autonomous vehicles, their mission, and the environment. The article presents a multi-level reinforcement learning framework that can effectively generate adversarial perturbations to misguide autonomous vehicles' missions. In the existing image attack methods against autonomous vehicles, optimization steps are repeated for every image frame. This framework removes the need for fully converged optimization at every frame. Using multi-level reinforcement learning, we integrate a state estimator and a generative adversarial network that generates the adversarial perturbations. Because the reinforcement learning agent, consisting of a state estimator, an actor, and a critic, uses only image streams, the proposed framework can misguide the vehicle and increase the adversary's reward without knowing the states of the vehicle or the environment. Simulation studies and a robot demonstration are provided to validate the proposed framework's performance.
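The closed adversary-in-the-loop structure the abstract describes can be sketched in a few lines. This is a deliberately tiny stand-in, assuming a 1-D surrogate: `ToyVehicle` plays the autonomous vehicle's controller, and `AdversaryAgent` compresses the state-estimator/actor/critic trio into a bounded perturbation policy with a running-average baseline. None of these names or dynamics come from the paper.

```python
class ToyVehicle:
    """1-D surrogate: the vehicle steers toward heading 0 based on the
    (possibly perturbed) image-derived heading estimate."""
    def __init__(self):
        self.heading = 0.0

    def step(self, perceived_heading):
        self.heading -= 0.5 * perceived_heading  # controller correction
        return self.heading

class AdversaryAgent:
    """Stand-in for the RL agent: 'estimates' the vehicle state from the
    image stream (here, just the perceived heading) and picks a bounded
    perturbation; a running average plays the role of the critic baseline."""
    def __init__(self, bound=0.2):
        self.bound = bound
        self.baseline = 0.0

    def act(self, estimate):
        # Push the perceived heading away from the mission heading (0).
        return self.bound if estimate >= 0 else -self.bound

    def update(self, reward):
        self.baseline = 0.9 * self.baseline + 0.1 * reward

vehicle, adversary = ToyVehicle(), AdversaryAgent()
rewards = []
for _ in range(20):
    estimate = vehicle.heading            # adversary sees only the stream
    delta = adversary.act(estimate)       # bounded image perturbation
    vehicle.step(estimate + delta)        # vehicle reacts to corrupted input
    reward = abs(vehicle.heading)         # adversary gains from deviation
    adversary.update(reward)
    rewards.append(reward)
```

The key property mirrored here is that the adversary never reads the true vehicle or environment state, only the perceived stream, yet still drives the mission error away from zero at every step.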
- Award ID(s): 2137753
- PAR ID: 10585386
- Publisher / Repository: ACM
- Date Published:
- Journal Name: ACM Transactions on Cyber-Physical Systems
- Volume: 9
- Issue: 2
- ISSN: 2378-962X
- Page Range / eLocation ID: 1 to 24
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.
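The Expectation over Transformation idea mentioned above optimizes the attack against the *expected* model response over a family of physical distortions, rather than against one clean image. A minimal sketch, with an assumed toy `model_confidence` and a brightness-plus-noise transform standing in for real viewpoint, lighting, and camera effects:

```python
import random

def model_confidence(pixels):
    # Toy detector confidence for the true class ("stop sign"):
    # higher mean intensity -> higher confidence.
    return sum(pixels) / len(pixels)

def random_transform(pixels, rng):
    """Sample one physical-world distortion: a brightness gain plus
    sensor noise, clipped back into [0, 1]."""
    gain = rng.uniform(0.7, 1.3)
    return [min(1.0, max(0.0, gain * p + rng.gauss(0, 0.02))) for p in pixels]

def eot_objective(pixels, n=100, seed=0):
    """Expectation over Transformation: average the true-class
    confidence over many sampled transformations; the attack would
    minimize this expectation instead of a single-image confidence."""
    rng = random.Random(seed)
    return sum(model_confidence(random_transform(pixels, rng)) for _ in range(n)) / n

clean = [0.8] * 32
adversarial = [0.2] * 32   # a perturbation that darkens the sign
```

A perturbation that only fools the model on one exact image tends to break under a new viewing angle; scoring candidates by `eot_objective` is what makes the printed attack survive those variations.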
- A framework for autonomous waypoint planning, trajectory generation through waypoints, and trajectory tracking for multi-rotor unmanned aerial vehicles (UAVs) is proposed in this work. Safe and effective operation of these UAVs demands obstacle-avoidance strategies and advanced trajectory planning and control schemes for stability and energy efficiency. To address this problem, a two-level optimization strategy is used for trajectory generation, and the trajectory is then tracked in a stable manner. The framework given here consists of the following components: (a) a deep reinforcement learning (DRL)-based algorithm for optimal waypoint planning while minimizing control energy and avoiding obstacles in a given environment; (b) an optimal, smooth trajectory-generation algorithm through waypoints that minimizes a combination of velocity, acceleration, jerk, and snap; and (c) a stable tracking control law that determines a control thrust force for a UAV to track the generated trajectory.
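The smoothness objective in component (b) — penalizing velocity, acceleration, jerk, and snap together — can be sketched for a sampled 1-D trajectory using repeated finite differences. The weights and discretization here are illustrative assumptions, not the paper's formulation:

```python
def finite_diff(seq, dt):
    # First-order finite difference: turns positions into velocities,
    # velocities into accelerations, and so on.
    return [(b - a) / dt for a, b in zip(seq, seq[1:])]

def trajectory_cost(positions, dt=0.1, weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of squared velocity, acceleration, jerk, and snap
    along a sampled 1-D trajectory (one weight per derivative order)."""
    cost, seq = 0.0, positions
    for w in weights:
        seq = finite_diff(seq, dt)
        cost += w * sum(x * x for x in seq)
    return cost

smooth = [0.1 * t for t in range(10)]   # constant-velocity ramp
jerky = [0.0, 1.0] * 5                  # oscillating trajectory
```

Minimizing this kind of cost subject to waypoint constraints is what produces trajectories a multi-rotor can track without aggressive, energy-wasting control inputs.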
- Habli, Ibrahim; Sujan, Mark; Bitsch, Friedemann (Ed.) We introduce DeepCert, a tool-supported method for verifying the robustness of deep neural network (DNN) image classifiers to contextually relevant perturbations such as blur, haze, and changes in image contrast. While the robustness of DNN classifiers has been the subject of intense research in recent years, the solutions delivered by this research focus on verifying DNN robustness to small perturbations in the images being classified, with perturbation magnitude measured using established Lp norms. This is useful for identifying potential adversarial attacks on DNN image classifiers, but cannot verify DNN robustness to contextually relevant image perturbations, which are typically not small when expressed with Lp norms. DeepCert addresses this underexplored verification problem by supporting: (1) the encoding of real-world image perturbations; (2) the systematic evaluation of contextually relevant DNN robustness, using both testing and formal verification; (3) the generation of contextually relevant counterexamples; and, through these, (4) the selection of DNN image classifiers suitable for the operational context (i) envisaged when a potentially safety-critical system is designed, or (ii) observed by a deployed system. We demonstrate the effectiveness of DeepCert by showing how it can be used to verify the robustness of DNN image classifiers built for two benchmark datasets ('German Traffic Sign' and 'CIFAR-10') to multiple contextually relevant perturbations.
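A contextual perturbation is parameterized by a meaningful magnitude (e.g., haze level) rather than an Lp norm, and robustness can then be measured as the smallest magnitude that flips the prediction. A toy sketch of that sweep, assuming an invented one-parameter haze model and a trivial brightness classifier (neither is DeepCert's encoding):

```python
def add_haze(pixels, level):
    """Blend each pixel toward white (1.0); `level` in [0, 1] is the
    contextually meaningful perturbation magnitude, not an Lp norm."""
    return [(1 - level) * p + level * 1.0 for p in pixels]

def classify(pixels, threshold=0.6):
    # Toy classifier: dark image -> class 0, bright image -> class 1.
    return 0 if sum(pixels) / len(pixels) < threshold else 1

def robustness_level(pixels, step=0.05):
    """Smallest haze level that changes the predicted class: a
    testing-style sweep over one contextual perturbation parameter."""
    original = classify(pixels)
    level = 0.0
    while level <= 1.0:
        if classify(add_haze(pixels, level)) != original:
            return level
        level = round(level + step, 10)
    return None   # robust across the whole sweep

image = [0.3] * 64
```

Formal verification replaces this sampling sweep with a proof over the entire perturbation interval, but the reported quantity is the same: a per-classifier robustness threshold in the contextual parameter, which is what lets a designer compare candidate classifiers for a given operating environment.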
- Multi-agent autonomous racing is a challenging problem for autonomous vehicles due to the split-second, complex decisions that vehicles must continuously make during a race. The presence of other agents on the track requires continuous monitoring of the ego vehicle's surroundings, and necessitates predicting the behavior of other vehicles so the ego can quickly react to a changing environment with informed decisions. In our previous work, we developed the DeepRacing AI framework for autonomous Formula One racing. Our DeepRacing framework was the first implementation to use the highly photorealistic Formula One game as a simulation testbed for autonomous racing. We have successfully demonstrated single-agent high-speed autonomous racing using Bezier curve trajectories. In this paper, we extend the capabilities of the DeepRacing framework towards multi-agent autonomous racing. To do so, we first develop and learn a virtual camera model from game data that the user can configure to emulate the presence of a camera sensor on the vehicle. Next, we propose and train a deep recurrent neural network that can predict the future poses of opponent agents in the field of view of the virtual camera using vehicle position, velocity, and heading data with respect to the ego vehicle racecar. We demonstrate early promising results for both of these contributions in the game. These added features will extend the DeepRacing framework to become more suitable for multi-agent autonomous racing algorithm development.
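The opponent-prediction task above — mapping a history of relative poses to future poses — has a natural non-learned baseline against which a recurrent network would be compared: constant-velocity extrapolation. A minimal sketch (the function name and 2-D pose format are assumptions for illustration, not DeepRacing's interface):

```python
def predict_future_poses(history, horizon):
    """Constant-velocity baseline standing in for the recurrent network:
    extrapolate an opponent's (x, y) pose from its last two observations."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = x1 - x0, y1 - y0
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, horizon + 1)]

# Opponent observed at three consecutive frames, in ego-relative coordinates.
track = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
future = predict_future_poses(track, 2)
```

A learned predictor earns its keep exactly where this baseline fails: braking zones, corner entry, and overtaking maneuvers, where opponent motion is far from constant velocity.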