Training self-driving systems to be robust to the long-tail of driving scenarios is a critical problem. Model-based approaches leverage simulation to emulate a wide range of scenarios without putting users at risk in the real world. One promising path to faithful simulation is to train a forward model of the world to predict the future states of both the environment and the ego-vehicle given past states and a sequence of actions. In this paper, we argue that it is beneficial to model the state of the ego-vehicle, which often has simple, predictable and deterministic behavior, separately from the rest of the environment, which is much more complex and highly multimodal. We propose to model the ego-vehicle using a simple and differentiable kinematic model, while training a stochastic convolutional forward model on raster representations of the state to predict the behavior of the rest of the environment. We explore several configurations of such decoupled models, and evaluate their performance both with Model Predictive Control (MPC) and direct policy learning. We test our methods on the task of highway driving and demonstrate lower crash rates and better stability. The code is available at https://github.com/vladisai/pytorch-PPUU/tree/ICLR2022. 
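The decoupling above rests on the ego-vehicle's dynamics being simple and differentiable. As a minimal sketch (the paper does not specify the exact model; a kinematic bicycle model with a hypothetical wheelbase and time step is assumed here), one forward step of such an ego model might look like:

```python
import math

def kinematic_step(x, y, theta, v, accel, steer, dt=0.1, wheelbase=2.7):
    """One step of a kinematic bicycle model: the ego state
    (position, heading, speed) evolves deterministically from
    the action (acceleration, steering angle)."""
    x = x + v * math.cos(theta) * dt      # advance position along heading
    y = y + v * math.sin(theta) * dt
    theta = theta + (v / wheelbase) * math.tan(steer) * dt  # heading change
    v = v + accel * dt                    # speed change from acceleration
    return x, y, theta, v

# Straight-line driving: zero steering keeps heading and lateral position fixed.
state = (0.0, 0.0, 0.0, 10.0)
for _ in range(10):
    state = kinematic_step(*state, accel=1.0, steer=0.0)
```

Every operation is differentiable, so gradients can flow from a planning loss back through the action sequence, which is what makes MPC and direct policy learning over such a model tractable.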
Learning naturalistic driving environment with statistical realism
Abstract: For simulation to be an effective tool for the development and testing of autonomous vehicles, the simulator must be able to produce realistic safety-critical scenarios with distribution-level accuracy. However, due to the high dimensionality of real-world driving environments and the rarity of long-tail safety-critical events, achieving statistical realism in simulation is a long-standing problem. In this paper, we develop NeuralNDE, a deep-learning-based framework that learns multi-agent interaction behavior from vehicle trajectory data, and propose a conflict critic model and a safety mapping network to refine the generation of safety-critical events so that they follow real-world occurrence frequencies and patterns. The results show that NeuralNDE achieves accurate statistics for both safety-critical driving (e.g., crash rate, crash type, crash severity, and near-miss statistics) and normal driving (e.g., distributions of vehicle speed, distance, and yielding behavior), as demonstrated in the simulation of urban driving environments. To the best of our knowledge, this is the first simulation model shown to reproduce the real-world driving environment with statistical realism, particularly for safety-critical situations.
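The abstract does not detail how the conflict critic refines generation, but the idea of matching real-world conflict frequencies can be sketched as a rejection-sampling loop (all names here are hypothetical; the actual NeuralNDE components are learned networks, not the stub callables shown):

```python
import random

def generate_step(behavior_model, conflict_critic, state, target_conflict_rate=1e-4):
    """Illustrative sketch of critic-refined generation: the behavior model
    proposes a candidate joint maneuver; when the conflict critic flags it
    as conflict-inducing, it is re-sampled, except with a small probability
    chosen to match the conflict rate observed in real-world data."""
    candidate = behavior_model(state)
    while conflict_critic(state, candidate) and random.random() > target_conflict_rate:
        candidate = behavior_model(state)  # re-sample a non-conflict behavior
    return candidate
```

This keeps rare conflicts in the simulation at roughly their empirical frequency instead of either suppressing them entirely or letting them occur far too often.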
- Award ID(s): 2223517
- PAR ID: 10406361
- Publisher / Repository: Nature Publishing Group
- Date Published:
- Journal Name: Nature Communications
- Volume: 14
- Issue: 1
- ISSN: 2041-1723
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Driver-assistance systems are becoming more commonplace; however, the realized safety benefits of these technologies depend on whether a person accepts and adopts automated driving aids. One challenge to adoption could be a preference-performance dissociation (PPD), which is a mismatch between a self-perceived desire for assistance and an objective need for it. Research has reported PPD in driving but has not extensively leveraged driving performance data to confirm its existence. Thus, the goal of this study was to compare drivers’ self-reported need for vehicle assistance to their objective driving performance. Twenty-one participants drove on a simulated road and traversed challenging, real-world roadway obstacles. Afterwards, they were asked about their preference for automated vehicle assistance (e.g., steering and braking) during their drive. Overall, some participants exhibited PPD, both over- and underestimating their need for a particular type of automated assistance. Findings can be used to develop shared control and adaptive automation strategies tailored to particular users and contexts across various safety-critical environments.
- Driving safety is a top priority for autonomous vehicles. Orthogonal to prior work handling accident-prone traffic events through algorithm design at the policy level, we investigate a Closed-loop Adversarial Training (CAT) framework for safe end-to-end driving in this paper through the lens of environment augmentation. CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios that are dynamically generated over time. A novel resampling technique is developed to turn log-replay real-world driving scenarios into safety-critical ones via probabilistic factorization, where adversarial traffic generation is modeled as the product of standard motion-prediction sub-problems. Consequently, CAT can launch more efficient physical attacks than existing safety-critical scenario generation methods and incurs significantly less computational cost in the iterative learning pipeline. We incorporate CAT into the MetaDrive simulator and validate our approach on hundreds of driving scenarios imported from real-world driving datasets. Experimental results demonstrate that CAT can effectively generate adversarial scenarios countering the agent being trained. After training, the agent achieves superior driving safety in both log-replay and safety-critical traffic scenarios on the held-out test set. Code and data are available at https://metadriverse.github.io/cat.
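The resampling idea above can be sketched as importance resampling over candidate opponent trajectories, weighted by the product of motion-prediction likelihood and attack effectiveness (a minimal sketch; the callables and their names are hypothetical stand-ins for CAT's learned motion predictor and its collision-based scoring):

```python
import random

def resample_adversarial(candidates, motion_prob, collision_score):
    """Weight each candidate opponent trajectory by the product of its
    likelihood under a motion-prediction model and a score for how
    strongly it threatens the ego plan, then sample from those weights.
    This turns a log-replay scenario into a safety-critical one while
    keeping the adversarial behavior plausible."""
    weights = [motion_prob(t) * collision_score(t) for t in candidates]
    total = sum(weights)
    if total == 0:
        return random.choice(candidates)  # no threatening candidate: fall back
    return random.choices(candidates, weights=weights, k=1)[0]
```

Because both factors are outputs of standard motion-prediction sub-problems, no separate adversarial generator has to be trained, which is where the claimed efficiency gain comes from.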
- Abstract: Vehicle behaviour prediction provides important information for decision-making in modern intelligent transportation systems. People with different driving styles have considerably different driving behaviours and hence exhibit different behavioural tendencies. However, most existing prediction methods do not consider these differences and apply the same model to all vehicles. Furthermore, most existing driver classification methods rely on offline learning that requires a long observation of driving history, and hence are not suitable for real-time driving behaviour analysis. To facilitate personalised models that can potentially improve vehicle behaviour prediction, the authors propose an algorithm that classifies drivers into different driving styles. The algorithm only requires data from a short observation window, making it more applicable to real-time online applications than existing methods that require long-term observation. Experimental results demonstrate that the proposed algorithm achieves consistent classification results and provides intuitive interpretation and statistical characteristics of different driving styles, which can be further used for vehicle behaviour prediction.
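Short-window style classification of the kind described above can be sketched as a nearest-centroid assignment over summary features (the feature set, centroids, and function names here are hypothetical illustrations, not the paper's actual algorithm):

```python
def classify_style(window, centroids):
    """Summarize a short observation window of (speed, acceleration,
    headway) samples into a mean feature vector, then assign the
    driving style whose centroid is nearest -- no long driving
    history is required."""
    n = len(window)
    feats = [sum(sample[i] for sample in window) / n for i in range(3)]

    def dist2(center):
        # squared Euclidean distance between the window features and a centroid
        return sum((f - c) ** 2 for f, c in zip(feats, center))

    return min(centroids, key=lambda name: dist2(centroids[name]))
```

A real-time system could run this over a sliding window, re-assigning the style label as new samples arrive and switching to the matching personalised prediction model.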
- In high-level Autonomous Driving (AD) systems, behavioral planning is in charge of making high-level driving decisions such as cruising and stopping, and is thus highly security-critical. In this work, we perform the first systematic study of semantic security vulnerabilities specific to overly-conservative AD behavioral planning, i.e., behaviors that can cause failed or significantly degraded mission performance, which can be critical for AD services such as robo-taxi/delivery. We call these semantic Denial-of-Service (DoS) vulnerabilities, which we envision to be most generally exposed in practical AD systems due to the tendency toward conservativeness to avoid safety incidents. To achieve high practicality and realism, we assume that the attacker can only introduce seemingly-benign external physical objects to the driving environment, e.g., off-road dumped cardboard boxes. To systematically discover such vulnerabilities, we design PlanFuzz, a novel dynamic testing approach that addresses various problem-specific design challenges. Specifically, we propose and identify planning invariants as novel testing oracles, and design new input generation to systematically enforce problem-specific constraints for attacker-introduced physical objects. We also design a novel behavioral planning vulnerability distance metric to effectively guide the discovery. We evaluate PlanFuzz on 3 planning implementations from practical open-source AD systems and find that it can effectively discover 9 previously unknown semantic DoS vulnerabilities without false positives. We find all our new designs necessary, as without each design, statistically significant performance drops are generally observed. We further perform exploitation case studies using simulation and real-vehicle traces. We discuss root causes and potential fixes.
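The planning-invariant oracle can be sketched as a fuzzing loop: mutate benign object placements, run the planner, and flag any scene where the invariant fails (all names and the stub callables below are hypothetical; PlanFuzz additionally uses a vulnerability distance metric to guide mutation, which is omitted here for brevity):

```python
def fuzz_planner(plan, mutate, invariant, seed_scene, budget=100):
    """Invariant-guided fuzzing sketch: repeatedly mutate benign object
    placements, run the behavioral planner on each mutated scene, and
    report the first scene where a planning invariant (e.g. 'the ego
    keeps making progress when its lane is free') is violated -- a
    candidate semantic denial-of-service vulnerability."""
    scene = seed_scene
    for _ in range(budget):
        scene = mutate(scene)
        decision = plan(scene)
        if not invariant(scene, decision):
            return scene  # invariant violated: candidate vulnerability found
    return None  # no violation found within the mutation budget
```

Because the oracle checks mission-level properties rather than crashes, it surfaces the overly-conservative behaviors (unnecessary stops, refusals to proceed) that ordinary safety testing would miss.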
 An official website of the United States government