Safe operation of autonomous mobile robots in close proximity to humans creates a need for enhanced trajectory tracking with low tracking errors. Linear optimal control techniques such as the Linear Quadratic Regulator (LQR) and Model Predictive Control (MPC) have been used successfully for low-speed applications, leveraging their model-based methodology with manageable computational demands. However, model and parameter uncertainties or other unmodeled nonlinearities may cause poor control actions and constraint violations. Nonlinear MPC has emerged as an alternative optimal-control approach but must overcome real-time deployment challenges, including fast sampling times, design complexity, and limited computational resources. In recent years, optimal control-based deployments have benefited enormously from the ability of Deep Neural Networks (DNNs) to serve as universal function approximators. This has enabled a plethora of previously inaccessible applications, but open questions around generalizability, benchmarking, and systematic verification and validation have also emerged. This paper presents a novel approach that fuses Deep Reinforcement Learning (DRL)-based longitudinal control with a traditional PID lateral controller for autonomous navigation. Our approach follows three steps: (i) generating an adequate-fidelity simulation scenario via a Real2Sim approach; (ii) training a DRL agent within this framework; and (iii) testing performance and generalizability on alternative scenarios. We use an initial tuned set of lateral PID controller gains to observe the vehicle response over a range of velocities. We then use a DRL framework to generate policies for an optimal longitudinal controller that complements the lateral PID to give the best tracking performance for the vehicle.
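As a rough illustration of the fused architecture described in this abstract, the sketch below pairs a fixed-gain lateral PID with a stand-in for a trained DRL longitudinal policy. All gains, limits, the observation layout, and the placeholder policy are illustrative assumptions, not the paper's tuned values or trained network.

```python
# Minimal sketch: PID lateral (steering) control fused with a DRL-style
# longitudinal (throttle) policy. Names and values are hypothetical.
import numpy as np


class LateralPID:
    """PID on cross-track error; gains assumed pre-tuned offline."""

    def __init__(self, kp=1.2, ki=0.01, kd=0.3, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def steer(self, cross_track_error):
        self.integral += cross_track_error * self.dt
        derivative = (cross_track_error - self.prev_error) / self.dt
        self.prev_error = cross_track_error
        u = (self.kp * cross_track_error
             + self.ki * self.integral
             + self.kd * derivative)
        return float(np.clip(u, -0.5, 0.5))  # steering-angle limit [rad]


def drl_longitudinal_policy(observation):
    """Stand-in for a trained DRL actor: maps an observation (e.g., speed
    error, heading error, curvature preview) to a throttle command in
    [-1, 1]. A real agent would be a network trained in the Real2Sim
    environment; this placeholder only mimics the interface."""
    speed_error = observation[0]
    return float(np.tanh(0.8 * speed_error))


# One step of the fused controller:
obs = np.array([2.0, 0.05, 0.01])   # [speed err, heading err, curvature]
steering = LateralPID().steer(cross_track_error=0.15)
throttle = drl_longitudinal_policy(obs)
print(f"steer={steering:.3f} rad, throttle={throttle:.3f}")
```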
Autonomous Vehicle Path Tracking Using Event-Triggered MPC With Switching Model: Methodology and Real-World Validation
ABSTRACT Model predictive control (MPC) is advantageous for autonomous vehicle path tracking but suffers from high computational complexity in real-time implementation. Event-triggered MPC reduces this burden by optimizing the control inputs only when needed instead of at every time step. Existing works in the literature have focused on algorithmic development and simulation validation for very specific scenarios; event-triggered MPC on a real-world full-size vehicle has therefore not been thoroughly investigated. This work develops an event-triggered MPC with a switching model for autonomous vehicle lateral motion control and implements it on a production vehicle for real-world validation. Experiments are conducted in both closed-road and open-road environments, with both low-speed and high-speed maneuvers as well as stop-and-go scenarios. The efficacy of the proposed event-triggered MPC, in terms of computational load savings without sacrificing control performance, is clearly demonstrated. It is also shown that event-triggered MPC can sometimes improve control performance even with fewer optimizations, contradicting existing conclusions drawn from simulation.
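To make the event-triggering idea concrete, here is a minimal sketch of a receding-horizon loop that re-solves the optimization only when the measured state drifts from the last prediction. The double-integrator plant, horizon, and trigger threshold are illustrative assumptions, not the paper's switching vehicle model.

```python
# Event-triggered MPC sketch: reuse the stored input sequence while the
# plant tracks the last prediction; re-solve only when it deviates.
import numpy as np

# Double-integrator stand-in for lateral-error dynamics (dt = 0.1 s).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([10.0, 1.0])   # state weight
R = np.array([[0.1]])      # input weight
N = 20                     # prediction horizon


def solve_mpc(x0):
    """Unconstrained finite-horizon LQ solve via backward Riccati
    recursion; returns the open-loop input sequence and the predicted
    state trajectory."""
    P, gains = Q, []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()                     # order gains forward in time
    u_seq, x_pred, x = [], [x0], x0
    for K in gains:
        u = -K @ x
        x = A @ x + B @ u
        u_seq.append(u)
        x_pred.append(x)
    return u_seq, x_pred


rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])                # initial lateral error, error rate
u_seq, x_pred = solve_mpc(x)
k, threshold, solves = 0, 0.05, 1
for t in range(50):
    # Event trigger: re-optimize only when the prediction error is too
    # large or the stored input sequence is exhausted.
    if k >= N - 1 or np.linalg.norm(x - x_pred[k]) > threshold:
        u_seq, x_pred = solve_mpc(x)
        k, solves = 0, solves + 1
    x = A @ x + B @ u_seq[k] + rng.normal(0.0, 0.002, 2)  # disturbed plant
    k += 1
print(f"MPC solves used: {solves} of 50 steps")
```

Under small disturbances the trigger rarely fires, which is the computational saving the abstract refers to; larger disturbances or model mismatch force more frequent re-optimization.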
- Award ID(s): 2237317
- PAR ID: 10644439
- Publisher / Repository: DOI PREFIX: 10.1049
- Date Published:
- Journal Name: IET Control Theory & Applications
- Volume: 19
- Issue: 1
- ISSN: 1751-8644
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: To extend the operation window of batteries, active cell balancing has been studied in the literature. However, such an advancement presents significant computational challenges for real-time optimal control, especially as the number of cells in a battery increases. This article investigates the use of reinforcement learning (RL) and model predictive control (MPC) to effectively balance battery cells while keeping the computational load at a minimum. Specifically, event-triggered MPC is introduced as a way to reduce real-time computation. Unlike the existing literature, where rule-based or threshold-based event-trigger policies are used to determine the event instances, deep RL is explored to learn and optimize the event-trigger policy. Simulation results demonstrate that the proposed framework can keep the cell state-of-charge variation under 1% while using less than 1% of the computational resources of conventional MPC.
Optimal control techniques such as model predictive control (MPC) have been widely studied and successfully applied across a diverse field of applications. However, the large computational requirements of these methods pose a significant challenge for embedded applications. While event-triggered MPC (eMPC) is one solution that could address this issue by taking advantage of the prediction horizon, an obstacle with this approach is that the event-trigger policy is complex to design so that it fulfills both throughput and control-performance requirements. To address this challenge, this paper proposes to design the event trigger by training a deep Q-network reinforcement learning agent (RLeMPC) to learn the optimal event-trigger policy. This control technique was applied to an active-cell-balancing controller for the range extension of an electric vehicle battery. Simulation results with MPC, eMPC, and RLeMPC control policies are presented along with a discussion of the challenges of implementing RLeMPC.
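A toy sketch of the learned event-trigger idea follows. The paper trains a deep Q-network, but a dependency-free tabular Q-learning agent over a discretized trigger state (prediction-error bin, steps since last solve) illustrates the same trade-off. The environment dynamics and reward shaping below are assumptions: penalize tracking error, and charge a fixed cost per optimization to encourage skipping.

```python
# Tabular Q-learning stand-in for the paper's deep Q-network trigger agent.
import numpy as np

N_ERR_BINS, N_AGE_BINS, N_ACTIONS = 10, 10, 2   # actions: 0 = skip, 1 = solve
Q = np.zeros((N_ERR_BINS, N_AGE_BINS, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1
SOLVE_COST = 0.05                                # per-optimization penalty


def discretize(pred_error, age):
    e = min(int(pred_error / 0.02), N_ERR_BINS - 1)
    return e, min(age, N_AGE_BINS - 1)


def step_env(state, action, rng):
    """Hypothetical trigger environment: solving resets the prediction
    error, skipping lets it drift. Returns (next_state, reward)."""
    err, age = state
    if action == 1:                              # re-run the MPC solve
        err, age = rng.normal(0.0, 0.005), 0
    else:
        err, age = err + abs(rng.normal(0.0, 0.01)), age + 1
    reward = -abs(err) - SOLVE_COST * action     # tracking vs. compute
    return (abs(err), age), reward


rng = np.random.default_rng(0)
state = (0.0, 0)
for _ in range(20000):
    e, a = discretize(*state)
    action = (rng.integers(N_ACTIONS) if rng.random() < eps
              else int(np.argmax(Q[e, a])))
    next_state, reward = step_env(state, action, rng)
    ne, na = discretize(*next_state)
    # Standard Q-learning temporal-difference update.
    Q[e, a, action] += alpha * (reward + gamma * Q[ne, na].max()
                                - Q[e, a, action])
    state = next_state

print("Learned policy (1 = trigger a solve):")
print(np.argmax(Q, axis=2))
```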
Abstract: For simulation to be an effective tool for the development and testing of autonomous vehicles, the simulator must be able to produce realistic safety-critical scenarios with distribution-level accuracy. However, due to the high dimensionality of real-world driving environments and the rarity of long-tail safety-critical events, how to achieve statistical realism in simulation is a long-standing problem. In this paper, we develop NeuralNDE, a deep learning-based framework that learns multi-agent interaction behavior from vehicle trajectory data, and propose a conflict critic model and a safety mapping network to refine the generation of safety-critical events, following real-world occurrence frequencies and patterns. The results show that NeuralNDE can achieve both accurate safety-critical driving statistics (e.g., crash rate/type/severity and near-miss statistics) and normal driving statistics (e.g., vehicle speed/distance/yielding behavior distributions), as demonstrated in the simulation of urban driving environments. To the best of our knowledge, this is the first time a simulation model can reproduce the real-world driving environment with statistical realism, particularly for safety-critical situations.
For energy-efficient Connected and Automated Vehicle (CAV) Eco-driving control on signalized arterials under uncertain traffic conditions, this paper explicitly considers traffic control devices (e.g., road markings, traffic signs, and traffic signals) and road geometry (e.g., road shapes, road boundaries, and road grades) constraints in a data-driven optimization-based Model Predictive Control (MPC) modeling framework. This modeling framework uses real-time vehicle driving and traffic signal data via Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) communications. In the MPC-based control model, this paper mathematically formulates location-based traffic control devices and road geometry constraints using the geographic information from High-Definition (HD) maps. The location-based traffic control devices and road geometry constraints have the potential to improve the safety, energy, efficiency, driving comfort, and robustness of connected and automated driving on real roads by considering interrupted flow facility locations and road geometry in the formulation. We predict a set of uncertain driving states for the preceding vehicles through an online learning-based driving dynamics prediction model. We then solve a constrained finite-horizon optimal control problem with the predicted driving states to obtain a set of Eco-driving references for the controlled vehicle. To obtain the optimal acceleration or deceleration commands for the controlled vehicle with the set of Eco-driving references, we formulate a Distributionally Robust Stochastic Optimization (DRSO) model (i.e., a special case of data-driven optimization models under moment bounds) with Distributionally Robust Chance Constraints (DRCC) with location-based traffic control devices and road geometry constraints. We design experiments to demonstrate the proposed model under different traffic conditions using real-world connected vehicle trajectory data and Signal Phasing and Timing (SPaT) data on a coordinated arterial with six actuated intersections on Fuller Road in Ann Arbor, Michigan, from the Safety Pilot Model Deployment (SPMD) project.
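One piece of the DRCC machinery can be illustrated compactly. Under a moment-based ambiguity set (only the mean and variance of an uncertain quantity are known), a distributionally robust chance constraint reduces to a deterministic tightening with the Cantelli back-off factor sqrt((1 - eps)/eps). The gap predictions and safety margin below are illustrative assumptions, not values from the paper's dataset or its full DRSO formulation.

```python
# Moment-based DRCC tightening sketch: enforce "gap >= d_safe with
# probability >= 1 - eps" for every distribution matching the predicted
# mean and standard deviation, via the Cantelli inequality.
import numpy as np

eps = 0.05                               # allowed violation probability
kappa = np.sqrt((1 - eps) / eps)         # distributionally robust back-off

# Predicted gap to the preceding vehicle over the horizon (mean, std),
# as would come from a learned driving-dynamics predictor (hypothetical).
gap_mean = np.array([30.0, 28.5, 27.2, 26.1, 25.3])   # [m]
gap_std = np.array([0.5, 0.9, 1.4, 2.0, 2.7])         # [m]
d_safe = 10.0                                          # minimum safe gap [m]

# The chance constraint becomes the deterministic condition
# gap_mean >= d_safe + kappa * gap_std at every horizon step.
tightened_bound = d_safe + kappa * gap_std
feasible = gap_mean >= tightened_bound
print("Tightened gap bounds [m]:", np.round(tightened_bound, 2))
print("Constraint satisfied per step:", feasible)
```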