Abstract: Deep Reinforcement Learning (DRL) has shown promise for voltage control in power systems due to its speed and model-free nature. However, learning optimal control policies through trial and error on a real grid is infeasible because of the mission-critical nature of power systems. Instead, DRL agents are typically trained on a simulator, which may not accurately represent the real grid. This discrepancy can lead to suboptimal control policies and raises concerns for power system operators. In this paper, we revisit the problem of RL-based voltage control and investigate how model inaccuracies affect the performance of the DRL agent. Extensive numerical experiments are conducted to quantify the impact of model inaccuracies on learning outcomes. Specifically, we focus on techniques that enable the DRL agent to learn robust policies that still perform well in the presence of model errors. Furthermore, the impact of the agent's decisions on the overall system loss is analyzed to provide additional insight into the control problem. This work aims to address the concerns of power system operators and make DRL-based voltage control more practical and reliable.
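The abstract's core robustness idea — do not let the agent overfit to a single, possibly inaccurate simulator — can be sketched with domain randomization over model parameters. The paper does not specify its exact technique; the environment class, the perturbed quantities (line reactances), and the error range below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_model(base_reactances, error_pct=0.1):
    """Sample a simulator model whose line reactances deviate from the
    nominal values by up to +/- error_pct, mimicking model inaccuracy
    between the simulator and the real grid."""
    factors = rng.uniform(1.0 - error_pct, 1.0 + error_pct,
                          size=base_reactances.shape)
    return base_reactances * factors

# Hypothetical training loop: each episode uses a freshly perturbed model,
# so the learned policy cannot overfit to one (possibly wrong) simulator.
base = np.array([0.05, 0.08, 0.03])  # illustrative per-unit line reactances
for episode in range(3):
    model = perturbed_model(base)
    # env = VoltageControlEnv(model)  # hypothetical environment constructor
    # train_step(agent, env)          # one DRL update on the perturbed model
```

A policy trained this way is optimized for performance across a family of plausible grids rather than for one fixed model, which is one common way to hedge against the simulator-to-reality gap the abstract describes.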
                            Implementing Deep Reinforcement Learning-Based Grid Voltage Control in Real-World Power Systems: Challenges and Insights
                        
                    
    
            Deep reinforcement learning (DRL) holds significant promise for managing voltage control challenges in simulated power grid environments. However, its real-world application in power system operations remains underexplored. This study rigorously evaluates DRL’s performance and limitations within actual operational contexts by utilizing detailed experiments across the IEEE 14-bus system, Illinois 200-bus system, and the ISO New England node-breaker model. Our analysis critically assesses DRL’s effectiveness for grid control from a system operator's perspective, identifying specific performance bottlenecks. The findings provide actionable insights that highlight the necessity of advancing AI technologies to effectively address the growing complexities of modern power systems. This research underscores the vital role of DRL in enhancing grid management and reliability. 
- PAR ID: 10614684
- Publisher / Repository: IEEE
- Date Published:
- ISBN: 979-8-3503-9042-1
- Page Range / eLocation ID: 1 to 5
- Subject(s) / Keyword(s): Deep reinforcement learning, autonomous voltage control, model fidelity, topology change
- Format(s): Medium: X
- Location: Dubrovnik, Croatia
- Sponsoring Org: National Science Foundation
More Like this
- 
            Mostafa Sahraei-Ardakani; Mingxi Liu (Eds.) This paper explores the application of deep reinforcement learning (DRL) to create a coordinating mechanism between synchronous generators (SGs) and distributed energy resources (DERs) for improved primary frequency regulation. Renewable energy sources, such as wind and solar, can be used to aid in frequency regulation of the grid. Without proper coordination between the sources, however, their participation only delays the SG governor response and the frequency deviation. The proposed DRL application uses a deep deterministic policy gradient (DDPG) agent to create a generalized coordinating signal for DERs. The coordinating signal communicates the degree of distributed participation to the SG governor, resolving the delayed governor response and reducing the system rate of change of frequency (ROCOF). The validity of the coordinating signal is demonstrated on a single-machine finite-bus system. The use of DRL for signal creation is explored in an under-frequency event. While further exploration is needed for validation in large systems, the development of this concept shows promising results toward improved power grid stability.
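The coordinating signal described above is, at its interface, a scalar in a bounded range produced from the measured frequency state. The paper does not disclose its DDPG actor architecture; the tiny deterministic policy below is a hypothetical stand-in that only illustrates the input/output contract (frequency deviation and ROCOF in, participation degree out).

```python
import numpy as np

def actor(state, W, b):
    """Minimal deterministic policy: maps (freq deviation, ROCOF) to a
    coordinating signal in [0, 1] that tells the SG governor the degree
    of DER participation. Network shape and scaling are illustrative."""
    h = np.tanh(W @ state + b)               # single hidden-layer feature
    return 0.5 * (np.tanh(h.sum()) + 1.0)    # squash output to [0, 1]

rng = np.random.default_rng(1)
W, b = rng.normal(size=(4, 2)), np.zeros(4)
state = np.array([-0.2, -0.5])  # under-frequency event: delta-f, ROCOF (p.u.)
signal = actor(state, W, b)
```

In a full DDPG setup this actor would be trained against a critic on episodes of simulated frequency events; here only the bounded-signal interface is shown.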
- 
            Distributed optimization is becoming popular for solving large power system problems with the objective of reducing computational complexity. To this end, the convergence performance of distributed optimization plays an important role in solving an optimal power flow (OPF) problem. One critical factor that has a significant impact on convergence performance is the reference bus location. Since the choice of reference bus does not affect the result of centralized DC OPF, we can change its location to obtain more accurate results in distributed optimization. In this paper, our goal is to provide insights into how to select the reference bus location for better convergence performance. We model the power grid as a graph and, based on graph-theoretic concepts, assign a score to each bus; we then cluster the buses to determine which are more suitable to serve as the reference bus. We implement analytical target cascading (ATC) on the IEEE 48-bus system to solve a DC OPF problem. The results show that selecting a proper reference bus yields more accurate results with an excellent convergence rate, while an improper selection may take many more iterations to converge.
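The bus-scoring step above can be sketched with one plausible graph-theoretic score. The paper's exact scoring and clustering procedure is not given here, so closeness centrality (inverse of total shortest-path distance) is an assumption chosen only to show the shape of the computation.

```python
from collections import deque

def closeness_scores(adj):
    """Score each bus by closeness centrality via BFS on the grid graph:
    central buses (short distances to all others) score higher."""
    scores = {}
    n = len(adj)
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total = sum(dist.values())
        scores[src] = (n - 1) / total if total else 0.0
    return scores

# Toy 5-bus radial feeder: bus 2 sits in the middle, so it scores highest
# and would be the natural reference-bus candidate under this score.
grid = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
scores = closeness_scores(grid)
best = max(scores, key=scores.get)
```

Buses would then be clustered by score, and a bus from the high-score cluster chosen as the reference before running ATC.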
- 
            This paper continues the development of the discrete electromechanical oscillation control (DEOC). The DEOC approach is based on step-wise control of the power output of electronically-interfaced resources (EIRs) and aims to significantly reduce the amplitude of multiple oscillatory modes in power systems. The theoretical formulation of the problem and the proposed solution are described. This work addresses the issues of a nonlinear grid representation and a favorable reduction of control actions from EIRs, as well as their impact on DEOC performance. Simulations on a 9-bus system validate the effectiveness of the proposed control even when heavily loaded scenarios are considered.
- 
            In this work, we investigate grid-forming control for power systems containing three-phase and single-phase converters connected to unbalanced distribution and transmission networks, investigate self-balancing between single-phase converters, and propose a novel balancing feedback for grid-forming control that explicitly allows trading off unbalance in voltage and power. We develop a quasi-steady-state power network model that allows us to analyze the interactions between three-phase and single-phase power converters across transmission, distribution, and standard transformer interconnections. We first investigate conditions under which this general network admits a well-posed Kron-reduced quasi-steady-state network model. Our main contribution leverages this reduced-order model to develop analytical conditions for stability of the overall network with grid-forming three-phase and single-phase converters connected through standard transformer interconnections. Specifically, we provide conditions on the network topology under which (i) single-phase converters autonomously self-synchronize to a phase-balanced operating point and (ii) single-phase converters phase-balance through synchronization with three-phase converters. Moreover, we establish that the conditions can be relaxed if a phase-balancing feedback control is used. Finally, case studies combining detailed models of transmission systems (i.e., IEEE 9-bus) and distribution systems (i.e., IEEE 13-bus) illustrate the results for (i) a power system containing a mix of transmission- and distribution-connected converters and (ii) a power system solely using distribution-connected converters at the grid edge.
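The Kron reduction mentioned above eliminates interior buses from the admittance matrix via the Schur complement, Y_red = Y_kk - Y_ke Y_ee^{-1} Y_ek. The toy 3-bus network below is illustrative (not taken from the paper): eliminating the middle bus of two series lines leaves the expected series-combined admittance between the end buses.

```python
import numpy as np

def kron_reduce(Y, keep, elim):
    """Eliminate interior buses from an admittance matrix via the
    Schur complement: Y_red = Y_kk - Y_ke * inv(Y_ee) * Y_ek."""
    Ykk = Y[np.ix_(keep, keep)]
    Yke = Y[np.ix_(keep, elim)]
    Yek = Y[np.ix_(elim, keep)]
    Yee = Y[np.ix_(elim, elim)]
    return Ykk - Yke @ np.linalg.solve(Yee, Yek)

# Toy 3-bus network: line 0-1 with admittance 2 p.u., line 1-2 with 3 p.u.
Y = np.array([[ 2.0, -2.0,  0.0],
              [-2.0,  5.0, -3.0],
              [ 0.0, -3.0,  3.0]])

# Eliminate the interior bus 1; buses 0 and 2 remain, coupled by the
# series combination 2*3/(2+3) = 1.2 p.u.
Yred = kron_reduce(Y, keep=[0, 2], elim=[1])
```

The well-posedness condition the paper analyzes corresponds to the eliminated block Y_ee being invertible, which the `np.linalg.solve` call requires.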