

Title: Problem-Driven Scenario Reduction Framework for Power System Stochastic Operation
Scenario reduction (SR) aims to identify a small yet representative scenario set that captures the underlying uncertainty, which is critical to scenario-based stochastic optimization (SBSO) of power systems. Existing SR techniques typically seek a statistical approximation of the original scenario set. However, because SR and SBSO are treated as two distinct and decoupled processes, statistical fidelity alone cannot guarantee that the reduced set preserves the optimality of the original problem. Instead, this paper incorporates the SBSO problem structure into the SR process and introduces a novel problem-driven scenario reduction (PDSR) framework. Specifically, we project the original scenario set from the distribution space onto the mutual decision applicability between scenarios in the problem space. The SR process, equipped with a problem-driven distance metric, is then cast as a mixed-integer linear programming formulation that selects the representative scenario set while minimizing the optimality gap. Furthermore, ex-ante and ex-post problem-driven evaluation indices are proposed to assess SR performance. Numerical experiments on two two-stage stochastic economic dispatch problems validate the effectiveness of PDSR and demonstrate that it significantly outperforms existing SR methods by identifying salient (e.g., worst-case) scenarios and achieving an optimality gap of less than 0.1% within acceptable computation time.
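The abstract describes the selection step as a mixed-integer linear program over a problem-driven distance. A minimal, hedged illustration of that idea follows; it is not the paper's formulation. The pulp solver, the toy sizes, and the distance d[i][j] (assumed here to measure how poorly the decision optimized for scenario j performs on scenario i) are all illustrative assumptions.

# Hedged sketch: scenario selection as a p-median style MILP with an assumed
# problem-driven distance d[i][j], e.g. the cost increase when the decision
# optimized for scenario j is evaluated on scenario i.
import pulp

N = 6                                   # original scenarios (toy size)
K = 2                                   # representatives to keep
d = [[abs(i - j) * 1.5 for j in range(N)] for i in range(N)]   # assumed distances
p = [1.0 / N] * N                       # scenario probabilities

prob = pulp.LpProblem("problem_driven_scenario_reduction", pulp.LpMinimize)
u = pulp.LpVariable.dicts("keep", range(N), cat="Binary")                  # keep scenario j
a = pulp.LpVariable.dicts("assign", (range(N), range(N)), cat="Binary")    # i represented by j

# expected problem-driven distance between each scenario and its representative
prob += pulp.lpSum(p[i] * d[i][j] * a[i][j] for i in range(N) for j in range(N))
prob += pulp.lpSum(u[j] for j in range(N)) == K
for i in range(N):
    prob += pulp.lpSum(a[i][j] for j in range(N)) == 1      # every scenario gets a representative
    for j in range(N):
        prob += a[i][j] <= u[j]                              # only kept scenarios may represent

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("kept scenarios:", [j for j in range(N) if u[j].value() > 0.5])

In the paper itself the distance is derived from the SBSO problem and the objective targets the optimality gap directly; the point of the sketch is only the MILP structure of the selection step.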
Award ID(s):
2047306
PAR ID:
10575494
Author(s) / Creator(s):
Publisher / Repository:
IEEE Transactions on Power Systems
Date Published:
Journal Name:
IEEE Transactions on Power Systems
ISSN:
0885-8950
Page Range / eLocation ID:
1 to 14
Subject(s) / Keyword(s):
Optimization; Uncertainty; Measurement; Power systems; Scalability; Linear programming; Decision making; Costs; Active distribution networks; Voltage; Problem-driven scenario reduction; stochastic optimization; worst-case scenario; risk management
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Decision making under uncertainty often requires choosing packages, or bags of tuples, that collectively optimize expected outcomes while limiting risks. Processing Stochastic Package Queries (SPQs) involves solving very large optimization problems on uncertain data. Monte Carlo methods create numerous scenarios, or sample realizations of the stochastic attributes of all the tuples, and generate packages with optimal objective values across these scenarios. The number of scenarios needed for accurate approximation---and hence the size of the optimization problem when using prior methods---increases with variance in the data, and the search space of the optimization problem increases exponentially with the number of tuples in the relation. Existing solvers take hours to process SPQs on large relations containing stochastic attributes with high variance. Besides enriching the SPaQL language to capture a broader class of risk specifications, we make two fundamental contributions toward scalable SPQ processing. First, we propose risk-constraint linearization (RCL), which converts SPQs into Integer Linear Programs (ILPs) whose size is independent of the number of scenarios used. Solving these ILPs gives us feasible and near-optimal packages. Second, we propose Stochastic Sketch Refine, a divide and conquer framework that breaks down a large stochastic optimization problem into subproblems involving smaller subsets of tuples. Our experiments show that, together, RCL and Stochastic Sketch Refine produce high-quality packages in orders of magnitude lower runtime than the state of the art. 
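As a concrete picture of the scenario-based setting described above, the toy snippet below scores one candidate package against Monte Carlo scenarios. The data, the candidate package, and the 5th-percentile risk measure are all assumptions; the snippet does not implement RCL or Stochastic Sketch Refine.

# Minimal sketch: evaluating a candidate package across Monte Carlo scenarios
# (assumed toy data and a value-at-risk style risk measure).
import numpy as np

rng = np.random.default_rng(0)
n_tuples, n_scenarios = 8, 1000
# each scenario is one joint realization of the stochastic attribute of all tuples
gains = rng.normal(loc=10.0, scale=4.0, size=(n_scenarios, n_tuples))

package = np.array([1, 0, 1, 1, 0, 0, 1, 0])        # hypothetical selection of tuples
totals = gains @ package                             # package value in every scenario

expected_value = totals.mean()                       # objective across scenarios
risk = np.quantile(totals, 0.05)                     # 5th-percentile outcome (risk measure)
print(f"E[value] = {expected_value:.1f}, 5% worst-case value = {risk:.1f}")

The growth issue the abstract points to is visible here: a naive ILP formulation carries one copy of the value expression per scenario, which is exactly what risk-constraint linearization is said to avoid.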
  2. Braverman, Mark (Ed.)
    We develop approximation algorithms for set-selection problems with deterministic constraints, but random objective values, i.e., stochastic probing problems. When the goal is to maximize the objective, approximation algorithms for probing problems are well-studied. On the other hand, few techniques are known for minimizing the objective, especially in the adaptive setting, where information about the random objective is revealed during the set-selection process and allowed to influence it. For minimization problems in particular, incorporating adaptivity can have a considerable effect on performance. In this work, we seek approximation algorithms that compare well to the optimal adaptive policy. We develop new techniques for adaptive minimization, applying them to a few problems of interest. The core technique we develop here is an approximate reduction from an adaptive expectation minimization problem to a set of adaptive probability minimization problems which we call threshold problems. By providing near-optimal solutions to these threshold problems, we obtain bicriteria adaptive policies. We apply this method to obtain an adaptive approximation algorithm for the Min-Element problem, where the goal is to adaptively pick random variables to minimize the expected minimum value seen among them, subject to a knapsack constraint. This partially resolves an open problem raised in [Goel et al., 2010]. We further consider three extensions on the Min-Element problem, where our objective is the sum of the smallest k element-weights, or the weight of the min-weight basis of a given matroid, or where the constraint is not given by a knapsack but by a matroid constraint. For all three of the variations we explore, we develop adaptive approximation algorithms for their corresponding threshold problems, and prove their near-optimality via coupling arguments. 
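A small numerical check of the identity that makes the reduction to threshold problems natural: for a nonnegative random variable X, E[X] equals the integral over t of P(X > t), so an expectation can be assembled from threshold (probability) quantities. The exponential samples and the truncation at t = 30 below are illustrative assumptions, not part of the paper.

# Sketch: E[X] recovered from tail probabilities P(X > t), trapezoidal rule.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=100_000)          # nonnegative samples of X (true mean 2.0)

thresholds = np.linspace(0.0, 30.0, 601)
tail_probs = np.array([(x > t).mean() for t in thresholds])   # P(X > t) at each threshold
dt = thresholds[1] - thresholds[0]
via_thresholds = ((tail_probs[:-1] + tail_probs[1:]) * dt / 2).sum()

print(f"E[X] directly: {x.mean():.3f}  via threshold probabilities: {via_thresholds:.3f}")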
  3. Event logs contain abundant information, such as activity names, timestamps, and activity executors. However, much of the existing trace clustering research has focused on using activity names to assist process scenario discovery. In addition, many trace clustering algorithms commonly used in the literature, such as the k-means clustering approach, require prior knowledge of the number of process scenarios in the log, which is sometimes not known a priori. This paper presents a two-phase approach that obtains timing information from event logs and uses it to assist process scenario discovery without requiring any prior knowledge about the process scenarios. We use five real-life event logs to compare the proposed two-phase approach with the commonly used k-means clustering approach in terms of the model's harmonic mean of the weighted average fitness and precision, i.e., the F1 score. The experimental data show that (1) the process scenario models obtained with the additional timing information have both higher fitness and higher precision scores than the models obtained without it, and (2) the two-phase approach not only removes the need for prior information about k but also yields an F1 score comparable to that of the optimal k-means approach, with the optimal k obtained through exhaustive search.
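For reference, the F1 score used above is the harmonic mean of a discovered model's (weighted average) fitness and precision; the short sketch below just evaluates that formula on assumed toy values.

# F1 as the harmonic mean of fitness and precision (toy numbers).
def f1_score(fitness: float, precision: float) -> float:
    if fitness + precision == 0:
        return 0.0
    return 2 * fitness * precision / (fitness + precision)

# e.g., a discovered process-scenario model with fitness 0.92 and precision 0.81
print(f"F1 = {f1_score(0.92, 0.81):.3f}")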
  4.
    We show that the Adaptive Greedy algorithm of Golovin and Krause achieves an approximation bound of (ln(Q/η)+1) for Stochastic Submodular Cover: here Q is the “goal value” and η is the minimum gap between Q and any attainable utility value Q' 
  5. Wildfires pose an escalating risk to communities and infrastructure, especially in regions undergoing increased fuel dryness and temperature extremes driven by climate change, as well as continued expansion into the wildland-urban interface (WUI). Probabilistic wildfire risk assessment provides a rigorous means of quantifying potential impacts, but its application is often hindered by the high computational cost of working with hundreds of thousands of complex wildfire scenarios. This study introduces a novel scenario reduction framework tailored to the unique characteristics of wildfire hazards, which often lack standard intensity metrics and exhibit highly nonlinear, spatially distributed behavior. The proposed framework selects a subset of scenarios that best represent the spatial and statistical diversity of the full dataset, thereby greatly reducing computational costs while accounting for uncertainties. This is achieved by mapping complex wildfire scenarios into a high-dimensional feature space, enabling similarity assessments based on spatial consequence patterns rather than standard intensity metrics. A k-medoids clustering approach is then used to identify a representative subset of scenarios, while an active-learning-based outlier selection procedure incorporates rare but high-impact events without inflating computational demands. The framework was first demonstrated using a simple illustrative example to show how its performance responds to different data characteristics. To further demonstrate the practicality of the framework, it was used for wildfire risk assessment in Spokane County, Washington, where the full dataset (1000 scenarios) was reduced to 41 representative scenarios while preserving the spatial patterns of burn probability and building damage with high fidelity. The results demonstrated that the framework significantly improves computational efficiency and accuracy compared to traditional scenario reduction methods, offering a scalable and flexible tool for probabilistic wildfire risk assessment. 
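The clustering step described above can be pictured with a naive k-medoids pass over per-scenario feature vectors. The sketch below covers only that selection step under assumed random features; the paper's feature construction and its active-learning outlier procedure are not reproduced.

# Hedged sketch: pick k representative scenarios by naive k-medoids over
# assumed per-scenario feature vectors.
import numpy as np

rng = np.random.default_rng(2)
n_scenarios, n_features, k = 200, 50, 8
features = rng.random((n_scenarios, n_features))           # assumed feature vectors

# pairwise Euclidean distances between scenario feature vectors
diff = features[:, None, :] - features[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

medoids = rng.choice(n_scenarios, size=k, replace=False)
for _ in range(50):                                        # simple alternating refinement
    labels = dist[:, medoids].argmin(axis=1)               # assign each scenario to nearest medoid
    new_medoids = medoids.copy()
    for c in range(k):
        members = np.where(labels == c)[0]
        if members.size:
            within = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[within.argmin()]      # member minimizing total within-cluster distance
    if np.array_equal(new_medoids, medoids):
        break
    medoids = new_medoids

print("representative scenario indices:", sorted(medoids.tolist()))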