Optimal control techniques such as model predictive control (MPC) have been widely studied and successfully applied across a wide range of applications. However, the large computational requirements of these methods pose a significant challenge for embedded applications. Event-triggered MPC (eMPC) can address this issue by taking advantage of the prediction horizon, but the event-trigger policy is difficult to design so that it satisfies both throughput and control-performance requirements. To address this challenge, this paper proposes to design the event trigger by training a deep Q-network reinforcement learning agent (RLeMPC) to learn the optimal event-trigger policy. This control technique was applied to an active-cell-balancing controller for the range extension of an electric vehicle battery. Simulation results with MPC, eMPC, and RLeMPC control policies are presented, along with a discussion of the challenges of implementing RLeMPC.
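The way eMPC "takes advantage of the prediction horizon" can be made concrete with a short sketch: between events, the controller applies the tail of the most recently computed input sequence instead of re-solving the optimization. This is a minimal sketch of that execution pattern, not the paper's implementation; the function names and trigger signature are hypothetical:

```python
import numpy as np

def run_empc(solve_mpc, plant_step, trigger, x0, horizon, steps):
    """Event-triggered MPC loop: the expensive optimization runs only when
    the trigger fires (or the buffered plan is exhausted); otherwise the
    tail of the last computed input sequence is applied open-loop."""
    x = np.asarray(x0, dtype=float)
    u_plan = solve_mpc(x)            # shape (horizon, n_inputs)
    k_event = 0                      # steps since the last event
    for _ in range(steps):
        if k_event >= horizon or trigger(x, k_event):
            u_plan = solve_mpc(x)    # event: re-optimize from current state
            k_event = 0
        x = plant_step(x, u_plan[k_event])   # apply buffered input
        k_event += 1
    return x
```

The trade-off the trigger must manage is visible here: every skipped event saves a full optimization, but the buffered inputs become increasingly open-loop as `k_event` grows.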
Reinforcement Learning-Based Event-Triggered Model Predictive Control for Electric Vehicle Active Battery Cell Balancing
Abstract: To extend the operating window of batteries, active cell balancing has been studied in the literature. However, such an advancement presents significant computational challenges for real-time optimal control, especially as the number of cells in a battery increases. This article investigates the use of reinforcement learning (RL) and model predictive control (MPC) to effectively balance battery cells while keeping the computational load at a minimum. Specifically, event-triggered MPC is introduced as a way to reduce real-time computation. In contrast to the existing literature, where rule-based or threshold-based event-trigger policies determine the event instances, deep RL is explored to learn and optimize the event-trigger policy. Simulation results demonstrate that the proposed framework can keep the cell state-of-charge variation under 1% while using less than 1% of the computational resources of conventional MPC.
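As a rough illustration of the learning setup described here, the sketch below trains a small Q-network over a two-action space: reuse the buffered plan, or trigger an MPC solve. The state features, reward weights, and network size are assumptions rather than the paper's actual design, and a practical deep Q-network would also add experience replay and a target network:

```python
import torch
import torch.nn as nn

# Q-network over a 2-action space: 0 = reuse buffered inputs, 1 = trigger MPC.
# State features (hypothetical): [soc_spread, time_since_event, tracking_error]
q_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 2))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def reward(soc_spread, triggered, w_compute=0.05):
    # Hypothetical shaping: penalize cell SOC spread and each MPC solve,
    # so the agent trades control performance against computational load.
    return -soc_spread - w_compute * float(triggered)

def td_update(s, a, r, s_next, done):
    """One DQN temporal-difference step on a single transition."""
    q = q_net(s)[a]
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max() * (1.0 - done)
    loss = (q - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key design choice this setup exposes is the compute penalty `w_compute`: it is the single knob that moves the learned policy along the throughput/performance trade-off that hand-tuned thresholds struggle to capture.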
- Award ID(s): 2237317
- PAR ID: 10652038
- Publisher / Repository: ASME
- Date Published:
- Journal Name: ASME Letters in Dynamic Systems and Control
- Volume: 5
- Issue: 2
- ISSN: 2689-6117
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
This paper presents and compares two model predictive control (MPC) approaches for battery cell state-of-charge (SOC) balancing. In both approaches, a linearized discrete-time model that takes into account individual cell capacities is used. The first approach is a linear MPC controller that effectively regulates multiple cells to track a target SOC level while satisfying physical constraints. The second approach is based on an explicit MPC implementation that reduces online computation while achieving comparable performance. The simulation results suggest that explicit MPC can deliver the same balancing performance as linear MPC while achieving faster online execution. Specifically, explicit MPC reduces the computation time by 37.3% in a five-cell battery example, at the cost of higher offline computation. However, the simulation results also reveal a significant limitation of explicit MPC for battery systems with a larger number of cells: as the number of cells and/or the prediction horizon increases, the computational requirements grow exponentially, making its application to SOC balancing for large battery systems impractical. To the best of the authors' knowledge, this is the first study that compares MPC and explicit MPC algorithms in the context of battery cell balancing.
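Per solve, a linear MPC of the kind described here reduces to a quadratic program. Below is a minimal sketch with cvxpy, using a toy five-cell model in which each cell's SOC integrates a per-cell balancing current scaled by its capacity; all numbers (capacities, bounds, weights, step size) are illustrative, not taken from the paper:

```python
import cvxpy as cp
import numpy as np

# Toy linearized cell model: soc[k+1] = soc[k] + (dt / Q_i) * u_i[k],
# where u_i is the per-cell balancing current and Q_i the cell capacity.
n_cells, N, dt = 5, 20, 60.0                        # 20 one-minute steps
Q = np.array([4.8, 5.0, 5.1, 4.9, 5.0]) * 3600.0    # capacities [A*s], assumed
soc0 = np.array([0.62, 0.60, 0.58, 0.61, 0.59])     # initial SOCs, assumed
soc_ref = 0.60                                      # common target SOC
u_max = 2.0                                         # current bound [A], assumed

soc = cp.Variable((n_cells, N + 1))
u = cp.Variable((n_cells, N))
constraints = [soc[:, 0] == soc0]
for k in range(N):
    constraints += [soc[:, k + 1] == soc[:, k] + cp.multiply(dt / Q, u[:, k]),
                    cp.abs(u[:, k]) <= u_max]
# Penalize SOC deviation from the target plus balancing effort.
cost = cp.sum_squares(soc - soc_ref) + 1e-3 * cp.sum_squares(u)
cp.Problem(cp.Minimize(cost), constraints).solve()
print(soc.value[:, -1])   # cell SOCs driven toward the common target
```

Explicit MPC precomputes the piecewise-affine solution of this QP over the state space offline, which is why its online cost is low but its offline cost, and memory, grow rapidly with the number of cells and the horizon.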
ABSTRACT: Model predictive control (MPC) is advantageous for autonomous vehicle path tracking but suffers from high computational complexity in real-time implementation. Event-triggered MPC aims to reduce this burden by optimizing the control inputs only when needed instead of at every time step. Existing works in the literature have focused on algorithmic development and simulation validation for very specific scenarios; event-triggered MPC has therefore not been thoroughly investigated on a real-world full-size vehicle. This work develops event-triggered MPC with a switching model for autonomous vehicle lateral motion control and implements it on a production vehicle for real-world validation. Experiments are conducted in both closed-road and open-road environments, with both low-speed and high-speed maneuvers as well as stop-and-go scenarios. The efficacy of the proposed event-triggered MPC, in terms of computational load saving without sacrificing control performance, is clearly demonstrated. It is also shown that event-triggered MPC can sometimes improve control performance even with fewer optimizations, contradicting existing conclusions drawn from simulation.
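For lateral motion control, the threshold-based triggers common in this line of work typically look like the sketch below: re-solve when the tracking error drifts past a bound or the buffered plan runs out. The signature and threshold values are illustrative, not taken from the paper:

```python
def threshold_trigger(lateral_error, heading_error, k_since_event,
                      e_max=0.1, psi_max=0.05, k_max=10):
    """Rule-based event trigger for lateral path tracking: fire when the
    lateral error [m] or heading error [rad] exceeds its threshold, or
    when the buffered input sequence is exhausted."""
    return (abs(lateral_error) > e_max
            or abs(heading_error) > psi_max
            or k_since_event >= k_max)
```

Hand-tuning `e_max`, `psi_max`, and `k_max` for all speeds and maneuvers is exactly the design burden that motivates learning the trigger policy instead.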
With the increasing frequency of natural disasters such as hurricanes that disrupt supply from the grid, there is a greater need for resiliency in electric supply. Rooftop solar photovoltaic (PV) panels along with batteries can provide resiliency to a house during a blackout caused by a natural disaster. Our previous work showed that intelligence can reduce the size of a PV+battery system for the same level of post-blackout service compared to a conventional system that does not employ intelligent control. The intelligent controller proposed is based on model predictive control (MPC), which has two main challenges. One, it requires simple yet accurate models, as it involves real-time optimization. Two, the discrete actuation of residential loads (on/off) makes the underlying optimization problem a mixed-integer program (MIP), which is challenging to solve. An attractive alternative to MPC is reinforcement learning (RL), as the real-time control computation is both model-free and simple. These advantages come with trade-offs: RL requires computationally expensive offline learning, and its performance is sensitive to various design choices. In this work, we propose an RL-based controller. We compare its performance with the MPC controller proposed in our prior work and with a non-intelligent baseline controller. The RL controller is found to provide resiliency performance similar to MPC, by commanding critical loads and batteries, with a significant reduction in computational effort.
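The contrast drawn here between MPC and RL hinges on the discrete on/off actuation. A sketch of why: the 2^n load combinations that force a mixed-integer program in MPC become a plain finite action set for RL. Everything below (load powers, battery parameters, dispatch rule) is an assumed toy setup:

```python
import itertools
import numpy as np

# Hypothetical setup: 4 critical loads with on/off actuation plus a battery.
load_powers = np.array([0.4, 0.8, 1.2, 0.3])   # kW, assumed

# Enumerate all 2^4 on/off combinations as the discrete RL action space;
# the policy simply picks an index instead of solving a MIP each step.
actions = list(itertools.product([0, 1], repeat=len(load_powers)))

def apply_action(a_idx, pv_kw, soc, capacity_kwh=10.0, dt_h=0.25):
    """Serve the selected loads from PV first, then discharge the battery
    to cover any deficit, and return the updated SOC and served power."""
    served = np.dot(actions[a_idx], load_powers)
    battery_kw = max(served - pv_kw, 0.0)
    soc_next = soc - battery_kw * dt_h / capacity_kwh
    return max(soc_next, 0.0), served
```

The cost of this simplicity is shifted offline: the policy that maps (SOC, PV forecast, time) to an action index must be learned through many simulated blackout episodes before deployment.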
Range anxiety and lack of adequate access to fast charging are proving to be important impediments to electric vehicle (EV) adoption. While many techniques for fast charging EV batteries (model-based and model-free) have been developed, they have focused on a single lithium-ion cell. Extensions to battery packs are scarce and often consider simplified architectures (e.g., series-connected) for ease of modeling. Computational considerations have also restricted fast-charging simulations to small battery packs of, e.g., four cells (both series- and parallel-connected). Hence, in this paper, we pursue a model-free approach based on reinforcement learning (RL) to fast charge a large battery pack comprising 444 cells. Each cell is characterized by an equivalent circuit model coupled with a second-order lumped thermal model to simulate the battery behavior. Once trained, the RL-based controller is straightforward to implement and has low computational complexity. In detail, we utilize Proximal Policy Optimization (PPO), a deep RL algorithm, for training. The agent is trained so that the capacity loss due to fast charging is minimized. The pack's highest cell surface temperature is used as an RL state, along with the pack's state of charge. Finally, in a detailed case study, the results are compared with the constant current-constant voltage (CC-CV) approach, and the RL-based approach is shown to outperform it: the proposed PPO model charges the battery as fast as CC-CV with a 5C constant-current stage while keeping the temperature as low as CC-CV with a 4C constant-current stage.
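The abstract specifies the state (pack SOC plus the hottest cell surface temperature) but not the reward. The sketch below mirrors that state definition and pairs it with a hypothetical reward shaping consistent with the stated objective of minimizing capacity fade while charging quickly; the weights are assumptions:

```python
import numpy as np

def fast_charge_state(soc, cell_temps):
    # State as described in the abstract: pack SOC and the highest
    # cell surface temperature across the 444-cell pack.
    return np.array([soc, np.max(cell_temps)])

def fast_charge_reward(soc, soc_prev, capacity_loss_step,
                       w_loss=10.0, w_time=0.01):
    # Hypothetical shaping: reward charge progress per step, penalize the
    # incremental capacity fade from the degradation model, and apply a
    # small per-step time penalty so the agent still charges quickly.
    return (soc - soc_prev) - w_loss * capacity_loss_step - w_time
```

Collapsing the pack's thermal field to its single hottest cell keeps the state low-dimensional, which is part of what makes the trained policy cheap to evaluate online compared to pack-level optimal control.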