Driving safety is a top priority for autonomous vehicles. Orthogonal to prior work that handles accident-prone traffic events through algorithm design at the policy level, this paper investigates a Closed-loop Adversarial Training (CAT) framework for safe end-to-end driving through the lens of environment augmentation. CAT aims to continuously improve the safety of driving agents by training them on safety-critical scenarios that are dynamically generated over time. A novel resampling technique turns log-replay real-world driving scenarios into safety-critical ones via probabilistic factorization, modeling adversarial traffic generation as the product of standard motion prediction sub-problems. Consequently, CAT launches more efficient physical attacks than existing safety-critical scenario generation methods and incurs significantly lower computational cost in the iterative learning pipeline. We incorporate CAT into the MetaDrive simulator and validate our approach on hundreds of driving scenarios imported from real-world driving datasets. Experimental results demonstrate that CAT can effectively generate adversarial scenarios countering the agent being trained. After training, the agent achieves superior driving safety in both log-replay and safety-critical traffic scenarios on the held-out test set. Code and data are available at https://metadriverse.github.io/cat.
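A minimal sketch of the factorized resampling idea described above: candidate opponent trajectories are reweighted by their prior likelihood (the motion-prediction term) times a collision score against the ego plan, and the adversarial behavior is sampled from that posterior. All names here (`resample_adversarial`, `prior_probs`, `collision_prob`) are illustrative assumptions, not identifiers from the paper or its codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

def resample_adversarial(candidate_trajs, prior_probs, collision_prob):
    # Posterior weight = prior likelihood of the trajectory (from a
    # standard motion predictor) x probability it collides with the
    # ego plan; sample the adversarial trajectory from this posterior.
    weights = np.asarray(prior_probs) * np.asarray(collision_prob)
    weights = weights / weights.sum()
    idx = rng.choice(len(candidate_trajs), p=weights)
    return candidate_trajs[idx]
```

Because the weighting factorizes, the expensive part (the prior over trajectories) can be reused across attacks, which is consistent with the lower cost the abstract claims.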
                    This content will become publicly available on August 5, 2026
                            
Long-term Traffic Simulation with Interleaved Autoregressive Motion and Scenario Generation
An ideal traffic simulator replicates the realistic long-term point-to-point trip that a self-driving system experiences during deployment. Prior models and benchmarks focus on closed-loop motion simulation for the initial agents in a scene. This is problematic for long-term simulation: agents enter and exit the scene as the ego vehicle reaches new regions. We propose InfGen, a unified next-token prediction model that performs interleaved closed-loop motion simulation and scene generation, automatically switching between the two modes. This enables stable long-term rollout simulation. InfGen matches the state of the art in short-term (9 s) traffic simulation and significantly outperforms all other methods in long-term (30 s) simulation.
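The interleaving described above can be caricatured as a single autoregressive loop whose next token is either a motion token (advance existing agents) or a scene token (insert/remove agents as the ego reaches new regions). The classes and the mode-switching rule below are hypothetical stand-ins, not InfGen's actual tokenization.

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    kind: str              # "motion" or "scene" (assumed two-mode vocabulary)
    payload: object = None

@dataclass
class Scene:
    agents: list = field(default_factory=list)
    steps: int = 0

    def step_agents(self, token):
        self.steps += 1    # advance all current agents one simulation tick

    def update_population(self, token):
        # Insert or remove agents as the ego vehicle enters new regions.
        if token.payload == "enter":
            self.agents.append(len(self.agents))
        elif self.agents:
            self.agents.pop()

def rollout(next_token, scene, horizon):
    # One predictor drives both modes: the token kind decides whether
    # this step simulates motion or regenerates the agent population.
    for _ in range(horizon):
        tok = next_token(scene)
        if tok.kind == "motion":
            scene.step_agents(tok)
        else:
            scene.update_population(tok)
    return scene
```

The point of the sketch is only the control flow: a unified model that emits both token types avoids the frozen-population assumption of pure closed-loop motion simulators.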
- Award ID(s): 2505865
- PAR ID: 10631761
- Publisher / Repository: https://doi.org/10.48550/arXiv.2506.17213
- Date Published:
- ISSN: 2506.17213
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Open-sourced kinematic models of the da Vinci Surgical System have previously been developed using serial chains for forward and inverse kinematics. However, these models do not describe the motion of every link in the closed-loop mechanism of the da Vinci manipulators; knowing the kinematics of all components in motion is essential for modeling the system dynamics and implementing representative simulations. This paper proposes a modeling method for the closed-loop kinematics, using the existing da Vinci kinematics and an optical motion capture link length calibration. The resulting link lengths and DH parameters are presented and used as the basis for ROS-based simulation models, which were run in RViz (visualization) and Gazebo (dynamics). Additionally, the closed-loop kinematic chain was verified by comparing the remote center of motion location in simulation against the hardware, and the dynamic simulation showed satisfactory joint stability and performance. All models and simulations are provided as an open-source package.
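For context on the DH parameters the abstract mentions: each consecutive pair of links is related by a standard Denavit-Hartenberg homogeneous transform. The helper below is a generic textbook construction, not code from the paper's open-source package.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    # Standard (distal) Denavit-Hartenberg transform between consecutive
    # link frames, parameterized by joint angle theta, link offset d,
    # link length a, and link twist alpha.
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ ct, -st * ca,  st * sa, a * ct],
        [ st,  ct * ca, -ct * sa, a * st],
        [0.0,       sa,       ca,      d],
        [0.0,      0.0,      0.0,    1.0],
    ])
```

Chaining these matrices along a serial branch gives the forward kinematics; the paper's contribution is calibrating such parameters so that the closed-loop chain is consistent.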
- Simulation forms the backbone of modern self-driving development. Simulators help develop, test, and improve driving systems without putting humans, vehicles, or their environment at risk. However, simulators face a major challenge: they rely on realistic, scalable, yet interesting content. While recent advances in rendering and scene reconstruction make great strides in creating static scene assets, modeling their layout, dynamics, and behaviors remains challenging. In this work, we turn to language as a source of supervision for dynamic traffic scene generation. Our model, LCTGen, combines a large language model with a transformer-based decoder architecture that selects likely map locations from a dataset of maps, and produces an initial traffic distribution as well as the dynamics of each vehicle. LCTGen outperforms prior work in both unconditional and conditional traffic scene generation in terms of realism and fidelity.
- Traditional traffic signal control focuses more on the optimization aspects, whereas the stability and robustness of the closed-loop system are less studied. This paper aims to establish the stability properties of traffic signal control systems through the analysis of a practical model predictive control (MPC) scheme, which models the traffic network with the conservation of vehicles based on a store-and-forward model and attempts to balance the traffic densities. More precisely, this scheme guarantees the exponential stability of the closed-loop system under state and input constraints when the inflow is feasible and traffic demand can be fully accessed. Practical exponential stability is achieved in case of small uncertain traffic demand by a modification of the previous scheme. Simulation results of a small-scale traffic network validate the theoretical analysis.
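The store-and-forward model the abstract refers to is a conservation-of-vehicles update: each link's queue at the next control interval is its current queue plus net flow over the interval. The following is a generic one-step sketch of that dynamic (with assumed clamping to the state constraints), not the paper's MPC implementation.

```python
import numpy as np

def store_and_forward_step(x, inflow, outflow, T=1.0, capacity=None):
    # Conservation of vehicles per link: x_{k+1} = x_k + T * (inflow - outflow),
    # where the signal controller shapes the outflow term.
    x_next = x + T * (inflow - outflow)
    x_next = np.maximum(x_next, 0.0)           # queues cannot go negative
    if capacity is not None:
        x_next = np.minimum(x_next, capacity)  # state (storage) constraint
    return x_next
```

An MPC scheme of the kind analyzed would roll this linear dynamic forward over a horizon and choose the signal splits (which determine the outflows) to balance the densities `x`.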
- Promising results have been achieved recently in category-level manipulation that generalizes across object instances. Nevertheless, it often requires expensive real-world data collection and manual specification of semantic keypoints for each object category and task. Additionally, coarse keypoint predictions and ignoring intermediate action sequences hinder adoption in complex manipulation tasks beyond pick-and-place. This work proposes a novel, category-level manipulation framework that leverages an object-centric, category-level representation and model-free 6 DoF motion tracking. The canonical object representation is learned solely in simulation and then used to parse a category-level task trajectory from a single demonstration video. The demonstration is reprojected to a target trajectory tailored to a novel object via the canonical representation. During execution, the manipulation horizon is decomposed into long-range, collision-free motion and last-inch manipulation. For the latter part, a category-level behavior cloning (CatBC) method leverages motion tracking to perform closed-loop control. CatBC follows the target trajectory, projected from the demonstration and anchored to a dynamically selected category-level coordinate frame. The frame is automatically selected along the manipulation horizon by a local attention mechanism. This framework makes it possible to teach different manipulation strategies from a single demonstration, without complicated manual programming. Extensive experiments demonstrate its efficacy in a range of challenging industrial tasks in high-precision assembly, which involve learning complex, long-horizon policies. The process exhibits robustness against uncertainty due to dynamics as well as generalization across object instances and scene configurations.