

Title: Managing Wind‐Based Electricity Generation in the Presence of Storage and Transmission Capacity

We investigate the management of a merchant wind energy farm co‐located with a grid‐level storage facility and connected to a market through a transmission line. We formulate this problem as a Markov decision process (MDP) with stochastic wind speed and electricity prices. Consistent with most deregulated electricity markets, our model allows these prices to be negative. As this feature makes it difficult to characterize any optimal policy of our MDP, we show the optimality of a stage‐ and partial‐state‐dependent‐threshold policy when prices can only be positive. We extend this structure when prices can also be negative to develop heuristic one (H1) that approximately solves a stochastic dynamic program. We then simplify H1 to obtain heuristic two (H2) that relies on a price‐dependent‐threshold policy and derivative‐free deterministic optimization embedded within a Monte Carlo simulation of the random processes of our MDP. We conduct an extensive and data‐calibrated numerical study to assess the performance of these heuristics and variants of known ones against the optimal policy, as well as to quantify the effect of negative prices on the value added by and environmental benefit of storage. We find that (i) H1 computes an optimal policy and on average is about 17 times faster to execute than directly obtaining an optimal policy; (ii) H2 has a near optimal policy (with a 2.86% average optimality gap), exhibits a two orders of magnitude average speed advantage over H1, and outperforms the remaining considered heuristics; (iii) storage brings in more value but its environmental benefit falls as negative electricity prices occur more frequently in our model.
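The price‐dependent‐threshold idea can be illustrated with a minimal single‐threshold dispatch sketch. This is an illustrative simplification, not the paper's H2: the function name, parameter values, and the single fixed threshold are all assumptions, and the actual heuristic uses price‐dependent thresholds optimized within a Monte Carlo simulation.

```python
# Minimal sketch (not the paper's H2): a threshold dispatch rule for a wind
# farm with co-located storage and a limited transmission line. All names and
# numbers are illustrative assumptions.

def dispatch(price, wind_mwh, storage_mwh, capacity_mwh, line_mwh,
             sell_threshold=30.0):
    """Return (sold_mwh, new_storage_mwh) for one period.

    If the price clears the threshold, sell as much as the line allows,
    drawing down storage; otherwise (including at negative prices) sell
    nothing and store the wind energy up to the storage capacity.
    """
    if price >= sell_threshold:
        available = wind_mwh + storage_mwh
        sold = min(available, line_mwh)
        new_storage = min(available - sold, capacity_mwh)
    else:
        sold = 0.0
        new_storage = min(storage_mwh + wind_mwh, capacity_mwh)
    return sold, new_storage
```

Note how negative prices change the economics: without storage, a negative price forces a choice between curtailing and paying to sell, whereas a storage facility lets the operator absorb wind energy and wait for prices to recover.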

 
NSF-PAR ID:
10080279
Author(s) / Creator(s):
 ;  ;  ;  
Publisher / Repository:
SAGE Publications
Date Published:
Journal Name:
Production and Operations Management
Volume:
28
Issue:
4
ISSN:
1059-1478
Format(s):
Medium: X Size: p. 970-989
Sponsoring Org:
National Science Foundation
More Like this
  1. Problem definition: Inspired by new developments in dynamic spectrum access, we study the dynamic pricing of wireless Internet access when demand and capacity (bandwidth) are stochastic. Academic/practical relevance: The demand for wireless Internet access has increased enormously. However, the spectrum available to wireless service providers is limited. The industry has, thus, altered conventional license-based spectrum access policies through unlicensed spectrum operations. The additional spectrum obtained through these operations has stochastic capacity. Thus, the pricing of this service by the service provider has novel challenges. The problem considered in this paper is, therefore, of high practical relevance and new to the academic literature. Methodology: We study this pricing problem using a Markov decision process model in which customers are posted dynamic prices based on their bandwidth requirement and the available capacity. Results: We characterize the structure of the optimal pricing policy as a function of the system state and of the input parameters. Because it is impossible to solve this problem for practically large state spaces, we propose a heuristic dynamic pricing policy that performs very well, particularly when the ratio of capacity to demand rate is low. Managerial implications: We demonstrate the value of using a dynamic heuristic pricing policy compared with the myopic and optimal static policies. The previous literature has studied similar systems with fixed capacity and has characterized conditions under which myopic policies perform well. In contrast, our setting has dynamic (stochastic) capacity, and we find that identifying good state-dependent heuristic pricing policies is of greater importance. Our heuristic policy is computationally more tractable and easier to implement than the optimal dynamic and static pricing policies. 
It also provides a significant performance improvement relative to the myopic and optimal static policies when capacity is scarce, a condition that holds for the practical setting that motivated this research. 
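The flavor of a state‐dependent heuristic pricing rule can be sketched as follows. This is a hypothetical illustration, not the heuristic from the paper: the scarcity‐weighted form and all constants are assumptions.

```python
# Illustrative state-dependent pricing sketch (not the paper's policy): quote
# a higher price when the request would consume a large fraction of the free
# bandwidth. All constants are assumptions.

def quote_price(base_price, bandwidth_req, capacity_free, scarcity_weight=2.0):
    """Return a posted price, or None if the request cannot be served."""
    if bandwidth_req > capacity_free:
        return None  # reject: not enough capacity
    utilization = bandwidth_req / capacity_free
    # Price grows linearly with how scarce the request makes capacity.
    return base_price * (1.0 + scarcity_weight * utilization)
```

A myopic policy would ignore `capacity_free` entirely; the abstract's finding is that with stochastic capacity, conditioning the price on the current state in this way matters more than in fixed-capacity settings.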
  2. Abstract

    Purpose

    We consider the following scenario: A radiotherapy clinic has a limited number of proton therapy slots available each day to treat cancer patients of a given tumor site. The clinic's goal is to minimize the expected number of complications in the cohort of all patients of that tumor site treated at the clinic, and thereby maximize the benefit of its limited proton resources.

    Methods

    To address this problem, we extend the normal tissue complication probability (NTCP) model–based approach to proton therapy patient selection to the situation of limited resources at a given institution. We assume that, on each day, a newly diagnosed patient is scheduled for treatment at the clinic with some probability, and with some benefit from protons over photons that is drawn from a probability distribution. When a new patient is scheduled for treatment, a decision for protons or photons must be made, and a patient may wait only a limited amount of time for a proton slot to become available. The goal is to determine the thresholds for selecting a patient for proton therapy, which optimally balance the competing goals of making use of all available slots while not blocking slots with patients with low benefit. This problem can be formulated as a Markov decision process (MDP), and the optimal thresholds can be determined via a value‐policy iteration method.

    Results

    The optimal thresholds depend on the number of available proton slots, the average number of patients under treatment, and the distribution of values. In addition, the optimal thresholds depend on the current utilization of the facility. For example, if one proton slot is available and a second frees up shortly, the optimal threshold is lower compared to a situation where all but one slot remain blocked for longer.

    Conclusions

    MDP methodology can be used to extend current NTCP model–based patient selection methods to the situation in which, on any given day, the number of proton slots is limited. The optimal threshold then depends on the current utilization of the proton facility. Although the optimal policy yields only a small nominal benefit over a constant threshold, it is more robust against variations in patient load.
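The threshold computation can be illustrated with a toy single‐slot version of the problem. This is an assumption‐laden simplification of the paper's MDP: one proton slot, geometric treatment durations, a discrete benefit distribution, and discounting are all choices made here for brevity, not taken from the paper.

```python
# Toy single-slot acceptance-threshold MDP (illustrative assumptions only).
# Each day a patient arrives with probability p, with a proton-over-photon
# benefit drawn uniformly from `benefits`. An occupied slot frees with
# probability q per day; future value is discounted by gamma.

def solve_threshold(p=0.5, q=0.2, gamma=0.99, benefits=(1.0, 2.0, 4.0, 8.0),
                    iters=2000):
    v_free = 0.0
    for _ in range(iters):
        # Value of a busy slot: wait (discounted) until it frees.
        v_busy = gamma * q * v_free / (1.0 - gamma * (1.0 - q))
        # Accept a patient with benefit b iff b + v_busy >= gamma * v_free.
        gain = sum(max(b + v_busy, gamma * v_free)
                   for b in benefits) / len(benefits)
        v_free = p * gain + (1.0 - p) * gamma * v_free
    v_busy = gamma * q * v_free / (1.0 - gamma * (1.0 - q))
    return gamma * v_free - v_busy  # minimum benefit worth a proton slot
```

The returned threshold captures the trade-off the abstract describes: the scarcer the slot (lower q, higher p), the higher the benefit a patient must offer to be assigned protons.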

     
  3. The electricity bill constitutes a significant portion of operational costs for large-scale data centers. Equipping data centers with on-site storage can reduce the electricity bill by shaping energy procurement from deregulated electricity markets with real-time price fluctuations. This work focuses on designing energy procurement and storage management strategies that minimize the electricity bill of storage-assisted data centers. Designing such strategies is challenging since the net energy demand of the data center and the electricity market prices are not known in advance, and the underlying problem is coupled over time through the evolution of the storage level. Using the competitive ratio as the performance measure, we propose an online algorithm that determines the energy procurement and storage management strategies using a threshold-based policy. Our algorithm achieves the optimal competitive ratio as a function of the price fluctuation ratio. We validate the algorithm using data traces from electricity markets and data-center energy demands. The results show that our algorithm achieves close to the offline optimal performance and outperforms existing alternatives.
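A threshold-based procurement rule of this general flavor can be sketched with the classic one-way-trading reservation price. This is an assumption, not the paper's exact algorithm: the reservation price sqrt(p_min * p_max), the lossless battery, and the all-or-nothing charging rule are simplifications for illustration.

```python
import math

# Sketch of threshold-based procurement (illustrative, not the paper's
# algorithm): with known price bounds [p_min, p_max], charge the battery only
# when the spot price falls below the reservation price sqrt(p_min * p_max).

def procure(price, demand_mwh, storage_mwh, capacity_mwh,
            p_min=10.0, p_max=90.0):
    """Return (purchase_mwh, new_storage_mwh) for one period."""
    threshold = math.sqrt(p_min * p_max)  # 30.0 for these bounds
    buy = demand_mwh  # always cover the net demand
    if price <= threshold:
        # Cheap period: also fill the battery.
        buy += capacity_mwh - storage_mwh
        storage_mwh = capacity_mwh
    else:
        # Expensive period: serve demand from storage first.
        discharge = min(storage_mwh, demand_mwh)
        buy -= discharge
        storage_mwh -= discharge
    return buy, storage_mwh
```

The price fluctuation ratio p_max / p_min governs how much such a rule can save, which is why the competitive ratio in the abstract is expressed as a function of that ratio.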
  4. A home energy management system (HEMS) enables residents to actively participate in demand response (DR) programs. It can autonomously optimize the electricity usage of home appliances to reduce the electricity cost based on time-varying electricity prices. However, due to the randomness in the utility's pricing process and in the resident's activities, developing an efficient HEMS is challenging. To address this issue, we propose a novel home energy management method for the optimal scheduling of different kinds of home appliances based on deep reinforcement learning (DRL). Specifically, we formulate the home energy management problem as a Markov decision process (MDP) that accounts for the randomness of real-time electricity prices and the resident's activities. A DRL approach based on proximal policy optimization (PPO) is developed to determine the optimal DR scheduling strategy. The proposed approach needs neither models of the appliances nor distributional knowledge of the randomness. Simulation results verify the effectiveness of our proposed approach.
  5. Environmental concerns and rising grid prices have motivated data center owners to invest in on-site renewable energy sources. However, these sources present challenges as they are unreliable and intermittent. In an effort to mitigate these issues, data centers are incorporating energy storage systems. This introduces the opportunity for electricity bill reduction, as energy storage can be used for power market arbitrage. We present two supervised learning-based algorithms: LearnBuy, which learns the amount to purchase, and LearnStore, which learns the amount to store, to solve this energy procurement problem. These algorithms utilize the idea of "learning from optimal": the values generated by the offline optimization serve as labels for training. We test our algorithms on a general case, considering buying and selling back to the grid, and a special case, considering only buying from the grid. In the general case, LearnStore achieves a 10-16% reduction compared to baseline heuristics, whereas in the special case, LearnBuy achieves a 7% reduction compared to prior art.
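The "learning from optimal" idea can be shown in miniature: fit a simple model to labels produced by an offline optimizer, then use the fitted rule online. This sketch is not LearnBuy or LearnStore themselves; the linear model, closed-form fit, and synthetic labels are all assumptions for illustration.

```python
# "Learning from optimal" in miniature (illustrative): fit a linear rule
# purchase ~ a*price + b to offline-optimal labels, then use it online.

def fit_linear(prices, optimal_buys):
    """Closed-form 1-D least squares: returns slope a and intercept b."""
    n = len(prices)
    mx = sum(prices) / n
    my = sum(optimal_buys) / n
    sxx = sum((x - mx) ** 2 for x in prices)
    sxy = sum((x - mx) * (y - my) for x, y in zip(prices, optimal_buys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Synthetic offline-optimal labels (assumed): buy more when the price is low.
prices = [10.0, 20.0, 30.0, 40.0, 50.0]
optimal = [9.0, 7.0, 5.0, 3.0, 1.0]
a, b = fit_linear(prices, optimal)
predict = lambda p: max(0.0, a * p + b)  # online purchase rule
```

The appeal of this design is that the offline optimization, which sees the whole price trace, distills its hindsight into labels that a cheap online model can imitate.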