Title: Optimal stopping with expectation constraints
We analyze an optimal stopping problem with a series of inequality-type and equality-type expectation constraints in a general non-Markovian framework. We show that the optimal stopping problem with expectation constraints (OSEC) in an arbitrary probability setting is equivalent to the constrained problem in weak formulation (an optimization over joint laws of stopping rules together with the Brownian motion and state dynamics on an enlarged canonical space), and thus the OSEC value is independent of any specific probabilistic setup. Using a martingale-problem formulation, we give an equivalent characterization of the probability classes in weak formulation, which implies that the OSEC value function is upper semianalytic. We then exploit a measurable selection argument to establish a dynamic programming principle in weak formulation for the OSEC value function, in which the conditional expected costs act as additional states for the constraint levels at the intermediate horizon.
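For orientation, the constrained stopping problem described above can be sketched as follows; the notation (state process X, reward functional Π, constraint functionals g_i and h_j, constraint levels y_i and z_j) is ours and merely illustrative, not the paper's.

```latex
% Schematic optimal stopping problem with expectation constraints (illustrative notation):
% maximize the expected reward over stopping times \tau, subject to inequality- and
% equality-type expectation constraints with levels y = (y_i) and z = (z_j).
V(t, y, z) \;=\; \sup_{\tau \ge t}
  \Big\{\, \mathbb{E}\big[\Pi(\tau, X_{\cdot\wedge\tau})\big] \;:\;
      \mathbb{E}\big[g_i(\tau, X_{\cdot\wedge\tau})\big] \le y_i,\ i \in \mathcal{I}, \;\;
      \mathbb{E}\big[h_j(\tau, X_{\cdot\wedge\tau})\big] = z_j,\ j \in \mathcal{J} \,\Big\}.
```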
Award ID(s):
2106556
PAR ID:
10535767
Author(s) / Creator(s):
Publisher / Repository:
IMS
Date Published:
Journal Name:
The Annals of Applied Probability
Volume:
34
Issue:
1B
ISSN:
1050-5164
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. We study a stochastic control/stopping problem with a series of inequality-type and equality-type expectation constraints in a general non-Markovian framework. We demonstrate that the stochastic control/stopping problem with expectation constraints (CSEC) is independent of a specific probability setting and is equivalent to the constrained stochastic control/stopping problem in weak formulation (an optimization over joint laws of Brownian motion, state dynamics, diffusion controls, and stopping rules on an enlarged canonical space). Using a martingale-problem formulation of controlled SDEs in the spirit of Stroock and Varadhan (2006), we characterize the probability classes in weak formulation by countably many actions of canonical processes, and thus obtain the upper semi-analyticity of the CSEC value function. Then we employ a measurable selection argument to establish a dynamic programming principle (DPP) in weak formulation for the CSEC value function, in which the conditional expected costs act as additional states for constraint levels at the intermediate horizon. This article extends El Karoui and Tan (2013) to the expectation-constraint case and extends our previous work (Bayraktar and Yao, 2024) to the more complicated setting where the diffusion is controlled. Compared with that paper, the topological properties of the diffusion-control spaces and the corresponding measurability issues are more technically involved, which complicates the arguments, especially the measurable selection for the super-solution side of the DPP in the weak formulation.
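To make the role of the conditional expected costs as additional state variables concrete, a heuristic version of such a dynamic programming relation over an intermediate horizon s can be written as below; the notation is ours, and the precise admissibility conditions in the paper are omitted.

```latex
% Heuristic DPP: either stop before the intermediate horizon s and collect the reward,
% or continue from s with the constraint levels updated to the conditional expected
% costs (Y_s, Z_s), which thus act as additional states.
V(t, y, z) \;\approx\; \sup\; \mathbb{E}\Big[\,
    \mathbf{1}_{\{\tau \le s\}}\, \Pi(\tau, X_{\cdot\wedge\tau})
    \;+\; \mathbf{1}_{\{\tau > s\}}\, V(s, Y_s, Z_s) \,\Big],
\qquad
Y_s = \mathbb{E}\big[\, g(\tau, X_{\cdot\wedge\tau}) \,\big|\, \mathcal{F}_s \big], \quad
Z_s = \mathbb{E}\big[\, h(\tau, X_{\cdot\wedge\tau}) \,\big|\, \mathcal{F}_s \big].
```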
2. In a chance constrained program (CCP), decision makers seek the best decision whose probability of violating the uncertainty constraints is within a prespecified risk level. As a CCP is often nonconvex and difficult to solve to optimality, much effort has been devoted to developing convex inner approximations for it, among which the conditional value-at-risk (CVaR) approximation has been regarded as the best for more than a decade. This paper studies and generalizes ALSO-X, originally proposed by Ahmed, Luedtke, SOng, and Xie in 2017, for solving a CCP. We first show that ALSO-X resembles a bilevel optimization, where the upper-level problem is to find the best objective function value and enforce the feasibility of the CCP for a given decision from the lower-level problem, and the lower-level problem is to minimize the expectation of the constraint violations subject to the upper bound on the objective function value provided by the upper-level problem. This interpretation motivates us to prove that when the uncertain constraints are convex in the decision variables, ALSO-X always outperforms the CVaR approximation. We further show (i) sufficient conditions under which ALSO-X can recover an optimal solution to a CCP; (ii) an equivalent bilinear programming formulation of a CCP, inspiring us to enhance ALSO-X with a convergent alternating minimization method (ALSO-X+); and (iii) an extension of ALSO-X and ALSO-X+ to distributionally robust chance constrained programs (DRCCPs) under the ∞-Wasserstein ambiguity set. Our numerical study demonstrates the effectiveness of the proposed methods.
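As a schematic illustration of the bilevel reading described in the abstract above, consider a chance constrained program with decision set X, linear objective c^T x, random vector ξ, risk level ε, and constraint function g; the notation, and the choice of (g(x,ξ))^+ as the violation measure, are ours and not taken from the paper.

```latex
% Chance constrained program (CCP):
\min_{x \in \mathcal{X}} \; c^{\top} x
\quad \text{s.t.} \quad
\mathbb{P}\big\{\, g(x, \xi) \le 0 \,\big\} \;\ge\; 1 - \varepsilon .

% Bilevel reading of ALSO-X (schematic): the upper level searches for the smallest
% objective bound v whose lower-level solution is feasible for the chance constraint,
% while the lower level minimizes the expected constraint violation under that bound.
\min_{v} \; v
\quad \text{s.t.} \quad
x(v) \in \operatorname*{arg\,min}_{x \in \mathcal{X},\; c^{\top} x \le v}
        \mathbb{E}\big[\, (g(x, \xi))^{+} \,\big],
\qquad
\mathbb{P}\big\{\, g(x(v), \xi) \le 0 \,\big\} \;\ge\; 1 - \varepsilon .
```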
3. This article explores various uncertain control co-design (UCCD) problem formulations. While previous work offers formulations that are method-dependent and limited to only a handful of uncertainties (often from one discipline), effective application of UCCD to real-world dynamic systems requires a thorough understanding of uncertainties and of how their impact can be captured. Since the first step is defining the UCCD problem of interest, this article aims to address some of the limitations of the current literature by identifying possible sources of uncertainty in a general UCCD context and then formalizing ways in which their impact is captured through problem formulation alone (without having to immediately resort to specific solution strategies). We first develop and then discuss a generalized UCCD formulation that can capture the uncertainty representations presented in this article. Issues such as the treatment of the objective function, the challenge of analysis-type equality constraints, and various formulations for inequality constraints are discussed. Then, more specialized problem formulations, such as stochastic in expectation, stochastic chance-constrained, probabilistic robust, worst-case robust, fuzzy expected value, and possibilistic chance-constrained UCCD formulations, are presented. Key concepts from these formulations, along with insights from closely related fields such as robust and stochastic control theory, are discussed, and future research directions are identified.
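To make the contrast between several of the listed formulations concrete for a single inequality constraint g(x, ξ) ≤ 0 under an uncertain quantity ξ, three standard treatments read as follows (illustrative notation, not the article's):

```latex
% Stochastic-in-expectation constraint:
\mathbb{E}_{\xi}\big[ g(x, \xi) \big] \;\le\; 0, \qquad
% Chance (probabilistic) constraint at risk level \varepsilon:
\mathbb{P}_{\xi}\big\{ g(x, \xi) \le 0 \big\} \;\ge\; 1 - \varepsilon, \qquad
% Worst-case robust constraint over an uncertainty set \Xi:
\max_{\xi \in \Xi} \; g(x, \xi) \;\le\; 0 .
```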
4. Bayesian Networks (BNs) represent conditional probability relations among a set of random variables (nodes) in the form of a directed acyclic graph (DAG), and have found diverse applications in knowledge discovery. We study the problem of learning the sparse DAG structure of a BN from continuous observational data. The central problem can be modeled as a mixed-integer program with an objective function composed of a convex quadratic loss function and a regularization penalty, subject to linear constraints. The optimal solution to this mathematical program is known to have desirable statistical properties under certain conditions. However, state-of-the-art optimization solvers are not able to obtain provably optimal solutions to the existing mathematical formulations for medium-size problems within reasonable computational times. To address this difficulty, we tackle the problem from both computational and statistical perspectives. On the one hand, we propose a concrete early stopping criterion to terminate the branch-and-bound process in order to obtain a near-optimal solution to the mixed-integer program, and we establish the consistency of this approximate solution. On the other hand, we improve the existing formulations by replacing the linear “big-M” constraints that represent the relationship between the continuous and binary indicator variables with second-order conic constraints. Our numerical results demonstrate the effectiveness of the proposed approaches.
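As a generic illustration of how a big-M linking constraint between a continuous coefficient and its binary indicator can be replaced by a second-order cone representable one, the standard perspective-type strengthening for a convex quadratic loss is sketched below; this is a common construction and not necessarily the authors' exact formulation.

```latex
% Big-M linking of a continuous coefficient \beta to a binary indicator z
% (\beta is forced to vanish when z = 0, and \beta^2 is charged in the quadratic loss):
- M z \;\le\; \beta \;\le\; M z, \qquad z \in \{0,1\}.

% Perspective-type second-order conic alternative: introduce s \ge 0, charge s in the
% objective in place of the quadratic term \beta^2, and impose
\beta^{2} \;\le\; s\, z ,
% a rotated second-order cone constraint. It still forces \beta = 0 when z = 0, needs
% no big-M constant, and since z \le 1 it enforces s \ge \beta^{2} in the relaxation.
```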
5. A finite-horizon nonlinear optimal control problem is considered for which the associated Hamiltonian satisfies a uniform semiconcavity property with respect to its state and costate variables. It is shown that the value function for this optimal control problem coincides with the value of a min-max game, provided the time horizon considered is sufficiently short. This further reduces to the maximization of a linear functional over a convex set. It is further proposed that the min-max game can be relaxed to a type of “stat” (stationary) game, in which no time-horizon constraint is involved.
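For context, the finite-horizon problem referred to above has the generic form below (illustrative notation); the abstract's assertions are that, for a sufficiently short horizon T - t_0, this value coincides with the value of a min-max game and further with the maximum of a linear functional over a convex set, constructions not reproduced here.

```latex
% Generic finite-horizon optimal control problem (illustrative notation):
V(t_0, x_0) \;=\; \inf_{u(\cdot)} \Big\{ \int_{t_0}^{T} L\big(x(s), u(s)\big)\, ds
    \;+\; \phi\big(x(T)\big) \Big\},
\qquad \dot{x}(s) = f\big(x(s), u(s)\big), \quad x(t_0) = x_0 .
```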