


This content will become publicly available on December 13, 2024

Title: Balancing the Power Grid with Cheap Assets
We have all heard that there is a growing need to secure resources to maintain supply-demand balance in a power grid facing increasing volatility from renewable energy sources. There are mandates for utility-scale battery systems in regions all over the world, and there is a growing science of “demand dispatch” for obtaining virtual energy storage from flexible electric loads such as water heaters, air conditioning, and pumps for irrigation. The question addressed in this tutorial is how to manage a large number of assets for balancing the grid. The focus is on variants of the economic dispatch problem, which may be regarded as the “feed-forward” component in an overall control architecture. 1) The resource allocation problem is identical to a finite-horizon optimal control problem with degenerate cost, so-called “cheap control”. This implies a form of state space collapse, whose form is identified: the marginal cost for each load class evolves in a two-dimensional subspace, spanned by a scalar co-state process and its derivative. 2) The implication for distributed control is remarkable: once the co-state process is synthesized, this common signal may be broadcast to each asset for optimal control. However, the optimal solution is extremely fragile, in a sense made clear through results from numerical studies. 3) Several remedies are proposed to address fragility. One is based on “robust training” in a particular Q-learning architecture (one approach to reinforcement learning). In numerical studies it is found that specialized training leads to more robust control solutions.
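The state space collapse in point 1) can be illustrated with a small numerical sketch. Everything below is invented for illustration (the co-state trajectory and the per-class coefficients are not values from the paper): each load class's marginal cost is a fixed linear combination of a common scalar co-state and its derivative, so broadcasting that single pair of signals determines every class's trajectory.

```python
import numpy as np

# Illustrative scalar co-state process p(t) and its derivative.
T, dt = 100, 0.1
t = np.arange(T) * dt
p = np.sin(t)                       # made-up co-state trajectory
p_dot = np.gradient(p, dt)          # its time derivative

# Each load class maps the common (p, p_dot) pair to its marginal cost
# through two class-specific coefficients (hypothetical values).
classes = {"water_heater": (1.0, 0.2), "air_conditioning": (0.6, 0.5)}
marginal_cost = {name: a * p + b * p_dot for name, (a, b) in classes.items()}

# Every marginal-cost trajectory lies in the 2-D subspace spanned by
# p and p_dot, as a least-squares fit against that basis confirms.
basis = np.vstack([p, p_dot]).T     # shape (T, 2)
for lam in marginal_cost.values():
    coeffs, *_ = np.linalg.lstsq(basis, lam, rcond=None)
    assert np.allclose(basis @ coeffs, lam)
```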
Award ID(s):
2122313
NSF-PAR ID:
10529264
Author(s) / Creator(s):
; ;
Publisher / Repository:
IEEE
Date Published:
ISBN:
979-8-3503-0124-3
Page Range / eLocation ID:
4012 to 4017
Subject(s) / Keyword(s):
Demand dispatch; reinforcement learning
Format(s):
Medium: X
Location:
Singapore, Singapore
Sponsoring Org:
National Science Foundation
More Like this
  1. Alessandro Astolfi (Ed.)
    Demand dispatch is the science of extracting virtual energy storage through the automatic control of deferrable loads to provide balancing or regulation services to the grid, while maintaining consumer-end quality of service (QoS). The control of a large collection of heterogeneous loads is in part a resource allocation problem, since different classes of loads are more valuable for different services. The goal of this paper is to unveil the structure of the optimal solution to the resource allocation problem, and to investigate short-term market implications. It is found that the marginal cost for each load class evolves in a two-dimensional subspace, spanned by a co-state process and its derivative. The resource allocation problem is recast to construct a dynamic competitive equilibrium model, in which the consumer utility is the negative of the cost of deviation from ideal QoS. It is found that a competitive equilibrium exists, with the equilibrium price equal to the negative of an optimal co-state process. Moreover, the equilibrium price differs from what would be obtained under the standard assumption that the consumer's utility is a function of power consumption.
  2. There is enormous flexibility potential in the power consumption of the majority of electric loads. This flexibility can be harnessed to obtain services for managing the grid: with carefully designed decision rules in place, power consumption for the population of loads can be ramped up and down, just like charging and discharging a battery, without any significant impact on consumers' needs. The concept is called Demand Dispatch, and the grid resource obtained from this design is called virtual energy storage (VES). In order to deploy VES, a balancing authority faces two challenges: 1. how to design local decision rules for each load, given the target aggregate power consumption (the distributed control problem), and 2. how to coordinate a portfolio of resources to maintain grid balance, given a forecast of net load (the resource allocation problem). Rather than separating resource allocation and distributed control, in this paper the two problems are solved simultaneously using a single convex program. The joint optimization model is cast as a finite-horizon optimal control problem in a mean-field setting, based on the new KLQ optimal control approach recently proposed by the authors. The simplicity of the proposed control architecture is remarkable: with a large portfolio of heterogeneous flexible resources, including loads such as residential water heaters, commercial water heaters, irrigation, and utility-scale batteries, the control architecture leads to a single scalar control signal broadcast to every resource in the domain of the balancing authority. Keywords: smart grids, demand dispatch, distributed control, controlled Markov chains.
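The single-scalar-broadcast idea can be sketched in a few lines. The class gains, horizon, and forecast-error signal below are made-up placeholders, and the linear response model is a drastic simplification of the paper's KLQ design:

```python
import numpy as np

rng = np.random.default_rng(0)
horizon = 24
net_load_error = rng.normal(0.0, 1.0, horizon)   # imbalance to absorb (toy)

# Hypothetical local sensitivity of each resource class to the common signal.
gains = {"res_water_heater": 0.5, "com_water_heater": 0.8,
         "irrigation": 0.3, "battery": 1.0}
total_gain = sum(gains.values())

# One scalar per time step, broadcast to every resource; scaled so the
# aggregate response matches the imbalance in this linearized sketch.
zeta = net_load_error / total_gain

aggregate = np.zeros(horizon)
for g in gains.values():
    aggregate += g * zeta                        # each class responds locally
```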
  3. null (Ed.)
    A new stochastic control methodology is introduced for distributed control, motivated by the goal of creating virtual energy storage from flexible electric loads, i.e., Demand Dispatch. In recent work, the authors introduced Kullback-Leibler-Quadratic (KLQ) optimal control as a stochastic control methodology for Markovian models. This paper develops KLQ theory and demonstrates its applicability to demand dispatch. In one formulation of the design, the grid balancing authority simply broadcasts the desired tracking signal, and the heterogeneous population of loads ramps power consumption up and down to accurately track the signal. Analysis of the Lagrangian dual of the KLQ optimization problem leads to a menu of solution options, and to expressions for the gradient and Hessian suitable for Monte-Carlo-based optimization. Numerical results illustrate these theoretical findings.
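A toy example of Monte-Carlo-based optimization of a KL-plus-quadratic objective, loosely in the spirit of the dual approach described above. The model is invented for illustration (it is not the paper's formulation): a standard normal is tilted to N(theta, 1), and the gradient of the tracking term is estimated from samples via the score-function estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
target = 2.0                                 # illustrative tracking target

def mc_gradient(theta, n=20_000):
    """Monte-Carlo gradient of KL(N(theta,1)||N(0,1)) + E[(x - target)^2]."""
    x = rng.normal(theta, 1.0, n)
    score = x - theta                        # d/dtheta log N(x; theta, 1)
    grad_tracking = np.mean((x - target) ** 2 * score)
    grad_kl = theta                          # KL term equals theta^2 / 2
    return grad_kl + grad_tracking

theta = 0.0
for _ in range(200):                         # plain stochastic gradient descent
    theta -= 0.05 * mc_gradient(theta)

# Analytic minimizer of theta^2/2 + (theta - target)^2 + 1 is 2*target/3.
```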
  4. null (Ed.)
    Pronounced variability due to the growth of renewable energy sources, flexible loads, and distributed generation is challenging residential distribution systems. This context motivates fast, efficient, and robust reactive power control. Optimal reactive power control is possible in theory by solving a non-convex optimization problem based on an exact distribution power flow model. However, the lack of high-precision instrumentation and reliable communications, as well as the heavy computational burden of non-convex optimization solvers, render computing and implementing the optimal control challenging in practice. Taking a statistical learning viewpoint, the input-output relationship between each grid state and the corresponding optimal reactive power control (a.k.a. the policy) is parameterized in the present work by a deep neural network, whose unknown weights are updated by minimizing the accumulated power loss over a number of historical and simulated training pairs, using the policy gradient method. In the inference phase, one simply feeds the real-time state vector into the learned neural network to obtain the ‘optimal’ reactive power control decision with only a few matrix-vector multiplications. The merits of this novel deep policy gradient approach include its computational efficiency as well as robustness to random input perturbations. Numerical tests on a 47-bus distribution network using real solar and consumption data corroborate these practical merits.
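The inference phase described above reduces to a few matrix-vector products. A minimal sketch follows; the layer sizes, the VAR limit `q_max`, and the weights themselves are random stand-ins, whereas in the paper the weights would come from the policy-gradient training step:

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_hidden, n_ctrl = 47, 64, 10       # illustrative dimensions

# Stand-in weights for a one-hidden-layer network (not trained).
W1 = 0.1 * rng.normal(size=(n_hidden, n_state))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.normal(size=(n_ctrl, n_hidden))
b2 = np.zeros(n_ctrl)
q_max = 0.5                                  # hypothetical inverter VAR limit

def policy(state):
    h = np.tanh(W1 @ state + b1)             # hidden layer
    return q_max * np.tanh(W2 @ h + b2)      # squash into [-q_max, q_max]

q = policy(rng.normal(size=n_state))         # state vector in, setpoints out
```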
  5. null (Ed.)
    Stability and reliability are among the most important concerns for isolated microgrid systems that have no support from the utility grid. Interval predictions are often applied to ensure the stability of isolated microgrids, as they cover a wider range of uncertainties and enable robust control based on richer information. In this paper, we propose a probabilistic microgrid energy exchange method based on the Model Predictive Control (MPC) approach to make better use of the prediction intervals, so that the stability and cost efficiency of isolated microgrids are improved simultaneously. Appropriate scenarios are selected from the predictions according to the evaluation of future trends and system capacity. In the meantime, a two-stage adaptive reserve strategy is adopted to further exploit the potential of interval predictions and maintain system security adaptively. Reserves are determined at the optimization stage to set aside extra capacity for fluctuations in renewable generation and load demand at the operation stage, based on how aggressive or conservative the system is, a level that is automatically updated at each step. The optimal dispatch problem is finally formulated as a mixed-integer linear programming model, and the MPC is formulated as an optimization problem with a discount factor introduced to adjust the weights. Case studies show that the proposed method effectively guarantees the stability of the system and improves economic performance.
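The receding-horizon loop with an adaptive reserve can be sketched in miniature. The dispatch rule, capacities, and interval forecasts below are toy placeholders; a real implementation would solve the mixed-integer program at each step rather than this simple rule:

```python
# Toy stand-in for the two-stage adaptive reserve: hold back capacity
# proportional to the width of the prediction interval (factor invented).
def reserve(interval_width, aggressiveness=0.5):
    return aggressiveness * interval_width

def step(soc, load_interval, cap=10.0):
    """Dispatch forecast midpoint plus reserve, limited by stored energy."""
    lo, hi = load_interval
    mid = 0.5 * (lo + hi)
    u = mid + reserve(hi - lo)
    u = min(u, soc, cap)                     # respect energy and rating limits
    return u, soc - u

soc, dispatched = 8.0, []
intervals = [(1.0, 2.0), (0.5, 2.5), (2.0, 3.0)]   # interval predictions (toy)
for iv in intervals:                               # rolling horizon: re-plan each step
    u, soc = step(soc, iv)
    dispatched.append(round(u, 2))
```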