Title: F1TENTH: An Open-source Evaluation Environment for Continuous Control and Reinforcement Learning
The deployment and evaluation of learning algorithms on autonomous vehicles (AVs) is expensive, slow, and potentially unsafe. This paper details the F1TENTH autonomous racing platform, an open-source evaluation framework for training, testing, and evaluating autonomous systems. With 1/10th-scale low-cost hardware and multiple virtual environments, F1TENTH enables safe and rapid experimentation of AV algorithms even in laboratory research settings. We present three benchmark tasks and baselines in the setting of autonomous racing, demonstrating the flexibility and features of our evaluation environment.
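The continuous-control task posed by a platform like this can be illustrated with a minimal kinematic single-track (bicycle) model, the kind of vehicle model commonly used in 1/10th-scale racing simulators. The parameter values and function names below are illustrative assumptions for a sketch, not the platform's actual implementation or API.

```python
import math

def step(state, steer, speed, wheelbase=0.33, dt=0.01):
    """Advance a kinematic single-track model by one timestep.

    state: (x, y, heading) in meters and radians;
    steer: steering angle [rad]; speed: forward velocity [m/s].
    """
    x, y, theta = state
    x += speed * math.cos(theta) * dt
    y += speed * math.sin(theta) * dt
    theta += (speed / wheelbase) * math.tan(steer) * dt
    return (x, y, theta)

# Drive straight at 2 m/s for one second (100 steps of 10 ms each).
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = step(state, steer=0.0, speed=2.0)
print(state)  # heading unchanged, x advanced ~2 m
```

A learned or classical controller closes the loop by mapping observations (e.g., laser scans) to the `steer` and `speed` inputs at each step; at racing speeds the short `dt` is what makes split-second control decisions meaningful.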
In the ever-evolving landscape of autonomous vehicles, high-speed autonomous racing has emerged as a captivating frontier for competition and research, pushing the limits of perception, planning, and control. Autonomous racing is a setting where the intersection of cutting-edge software and hardware development sparks unprecedented opportunities and confronts unique challenges. The motorsport axiom, “If everything seems under control, then you are not going fast enough,” resonates in this special issue, underscoring the demand for algorithms and hardware that can operate at the limits of control, traction, and agility. In pursuing autonomy at high speeds, the racing environment pushes autonomous vehicles to execute split-second decisions with high precision. Autonomous racing, we believe, offers a litmus test for the true capabilities of self-driving software. Just as racing has historically served as a proving ground for automotive technology, autonomous racing now presents itself as the crucible for testing self-driving algorithms. While routine driving situations dominate most autonomous vehicle operation, focusing on extreme situations and environments is crucial to support investigation into safety benefits. The urgency of advancing high-speed autonomy is palpable in burgeoning autonomous racing competitions such as Formula Student Driverless, F1TENTH autonomous racing, Roborace, and the Indy Autonomous Challenge. These arenas provide a literal testbed for perception, planning, and control algorithms, and they symbolize the accelerating traction of autonomous racing as a proving ground for agile and safe autonomy. Our special issue focuses on cutting-edge research into software and hardware solutions for high-speed autonomous racing.
We sought contributions from the robotics and autonomy communities that delve into the intricacies of head-to-head multi-agent racing: modeling vehicle dynamics at high speeds, developing advanced perception, planning, and control algorithms, as well as the demonstration of algorithms, in simulation and in real-world vehicles. While presenting recent developments for autonomous racing, we believe these special issue papers will also create an impact in the broader realm of autonomous vehicles.
Autonomous cyber-physical systems must be able to operate safely in a wide range of complex environments. To ensure safety without limiting mitigation options, these systems require detection of safety violations by mitigation trigger deadlines. Because these systems operate in complex environments, multimodal prediction is often required. For example, an autonomous vehicle (AV) operates in complex traffic scenes in which any given vehicle may exhibit several plausible future behavior modes (e.g., stop, merge, turn, etc.); therefore, to ensure collision avoidance, an AV must be able to predict the possible multimodal behaviors of nearby vehicles. In previous work, model predictive runtime verification (MPRV) successfully detected future violations by a given deadline, but MPRV only considers a single mode of prediction (i.e., unimodal prediction). We design multimodal model predictive runtime verification (MMPRV) to extend MPRV to consider multiple modes of prediction, and we introduce Predictive Mission-Time Linear Temporal Logic (PMLTL) as an extension of MLTL to support the evaluation of probabilistic multimodal predictions. We examine the correctness and real-time feasibility of MMPRV through two AV case studies in which MMPRV utilizes (1) a physics-based multimodal predictor on the F1Tenth autonomous racing vehicle and (2) current state-of-the-art deep neural network multimodal predictors trained and evaluated on the Argoverse motion forecasting dataset. We found that meeting real-time requirements was a challenge for the latter, especially when targeting an embedded computing platform.
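The multimodal check described above can be sketched as follows: given several predicted future trajectories for a nearby vehicle, each carrying a mode probability, verify a bounded-horizon safety property per mode (here, a minimum-separation invariant) and aggregate the probability mass of the modes that violate it. The property, threshold, and trajectory data are illustrative assumptions for this sketch, not the paper's actual MMPRV algorithm or PMLTL semantics.

```python
def min_separation_ok(ego_traj, other_traj, d_min):
    """Check 'always within the horizon: distance >= d_min' for one mode."""
    return all(
        ((ex - ox) ** 2 + (ey - oy) ** 2) ** 0.5 >= d_min
        for (ex, ey), (ox, oy) in zip(ego_traj, other_traj)
    )

def violation_probability(ego_traj, modes, d_min):
    """Sum the probability of predicted modes that violate the property."""
    return sum(p for p, traj in modes if not min_separation_ok(ego_traj, traj, d_min))

# Illustrative data: the ego holds its lane; the other vehicle either
# keeps its lane (p = 0.7) or merges toward the ego (p = 0.3).
ego = [(t * 1.0, 0.0) for t in range(5)]
stay = [(t * 1.0, 3.0) for t in range(5)]
merge = [(t * 1.0, 3.0 - 0.8 * t) for t in range(5)]
p_viol = violation_probability(ego, [(0.7, stay), (0.3, merge)], d_min=1.0)
# Only the merge mode violates the separation bound, so p_viol is 0.3.
```

A runtime monitor would compare such an aggregated violation probability against a risk threshold before the mitigation trigger deadline expires.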
Jiang, Minghao; Miller, Kristina; Sun, Dawei; Liu, Zexiang; Jia, Yixuan; Datta, Arnab; Ozay, Necmiye; Mitra, Sayan
(IEEE ICRA 2021 International Conference on Robotics and Automation, Workshop on Opportunities and Challenges with Autonomous Racing)
Self-driving autonomous vehicles (AVs) have recently gained popularity as a research topic. The safety of AVs is exceptionally important, as a failure in the design of an AV could lead to catastrophic consequences. AV systems are highly heterogeneous, with many different and complex components, so it is difficult to perform end-to-end testing. One solution to this dilemma is to evaluate AVs using a simulated racing competition. In this thesis, we present such a competition, the Generalized RAcing Intelligence Competition (GRAIC). To compete in GRAIC, participants submit controller files that are deployed on a racing ego-vehicle on different race tracks. To evaluate the submitted controllers, we also developed a testing pipeline, Autonomous System Operations (AutOps). AutOps is an automated, scalable, and fair testing pipeline built with software engineering techniques such as continuous integration, containerization, and serverless computing. To evaluate submitted controllers in non-trivial circumstances, we populate the race tracks with scenarios: pre-defined traffic situations commonly seen on real roads. We present a dynamic scenario-testing strategy that generates new scenarios based on the results of the ego-vehicle passing through previous scenarios.
Agnihotri, Abhijeet; O'Kelly, Matthew; Mangharam, Rahul; Abbas, Houssam
(Proceedings of the 51st ACM Technical Symposium on Computer Science Education)
Teaching autonomous systems is challenging because it is a rapidly advancing cross-disciplinary field that requires theory to be continually validated on physical platforms. For an autonomous vehicle (AV) to operate correctly, it needs to satisfy safety and performance properties that depend on the operational context and on interaction with environmental agents, which can be difficult to anticipate and capture. This paper describes a senior undergraduate level course on the design, programming, and racing of 1/10th-scale autonomous race cars. We explore AV safety and performance concepts at the limits of perception, planning, and control, in a highly interactive and competitive environment. The course includes an ethics-centered design philosophy, which seeks to engage the students in an analysis of the ethical and socio-economic implications of autonomous systems. Our hypothesis is that 1/10th-scale autonomous vehicles sufficiently capture the scaled dynamics, sensing modalities, decision making, and risks of real autonomous vehicles, while remaining a safe and accessible platform for teaching the foundations of autonomous systems. We describe the design, deployment, and feedback from two offerings of this class for college seniors and graduate students, open-source community development across 36 universities, international racing competitions, student skill enhancement and employability, and recommendations for tailoring it to various settings.
O'Kelly, Matthew, and Mangharam, Rahul. F1TENTH: An Open-source Evaluation Environment for Continuous Control and Reinforcement Learning. Proceedings of the NeurIPS 2019 Competition and Demonstration Track. Retrieved from https://par.nsf.gov/biblio/10221878.
@article{osti_10221878,
title = {F1TENTH: An Open-source Evaluation Environment for Continuous Control and Reinforcement Learning},
url = {https://par.nsf.gov/biblio/10221878},
abstractNote = {The deployment and evaluation of learning algorithms on autonomous vehicles (AV) is expensive, slow, and potentially unsafe. This paper details the F1TENTH autonomous racing platform, an open-source evaluation framework for training, testing, and evaluating autonomous systems. With 1/10th-scale low-cost hardware and multiple virtual environments, F1TENTH enables safe and rapid experimentation of AV algorithms even in laboratory research settings. We present three benchmark tasks and baselines in the setting of autonomous racing, demonstrating the flexibility and features of our evaluation environment.},
journal = {Proceedings of the NeurIPS 2019 Competition and Demonstration Track},
author = {O'Kelly, Matthew and Mangharam, Rahul}
}