Title: Optimal Control and Directional Differentiability for Elliptic Quasi-Variational Inequalities
Abstract: We focus on elliptic quasi-variational inequalities (QVIs) of obstacle type and prove a number of results on the existence of solutions, directional differentiability and optimal control of such QVIs. We give three existence theorems based on an order approach, an iteration scheme and a sequential regularisation through partial differential equations. We show that the solution map taking the source term into the set of solutions of the QVI is directionally differentiable for general data and locally Hadamard differentiable obstacle mappings, thereby extending in particular the results of our previous work, which provided the first differentiability result for QVIs in infinite dimensions. Optimal control problems with QVI constraints are also considered, and we derive various forms of stationarity conditions for such control problems, thus supplying among the first such results in this area.
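The iteration-scheme approach mentioned in the abstract can be illustrated concretely. The following is a minimal sketch, not taken from the paper: a 1D obstacle problem discretized by finite differences and solved by projected Gauss-Seidel, with the QVI handled by freezing the obstacle at the previous iterate. The obstacle map `Phi` here is a hypothetical solution-dependent constant obstacle chosen only so the fixed point is easy to see.

```python
import numpy as np

def solve_vi(f, psi, n, tol=1e-10, max_it=5000):
    """Projected Gauss-Seidel for the 1D obstacle VI:
    -u'' = f where u < psi, with u <= psi and zero Dirichlet BCs."""
    h = 1.0 / (n + 1)
    u = np.zeros(n)
    for _ in range(max_it):
        u_old = u.copy()
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            # unconstrained Gauss-Seidel update, then project onto {u <= psi}
            u[i] = min(0.5 * (left + right + h * h * f[i]), psi[i])
        if np.max(np.abs(u - u_old)) < tol:
            break
    return u

def solve_qvi(f, Phi, n, outer_it=50, tol=1e-6):
    """Fixed-point iteration u_{k+1} = S(f, Phi(u_k)): each outer step
    solves a VI with the obstacle frozen at the previous iterate."""
    u = np.zeros(n)
    for _ in range(outer_it):
        u_new = solve_vi(f, Phi(u), n)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

n = 49
f = 10.0 * np.ones(n)                              # constant source term
Phi = lambda u: 0.1 + 0.5 * u.max() * np.ones(n)   # obstacle depends on the solution
u = solve_qvi(f, Phi, n)
# since the unconstrained solution exceeds the obstacle, the maximum m
# satisfies the fixed-point relation m = 0.1 + 0.5 m, i.e. m = 0.2
print(u.max())
```

Starting from zero, the iterates increase monotonically toward the solution, which mirrors the order-theoretic flavour of the existence results described above.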
Award ID(s): 2012391
NSF-PAR ID: 10253628
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Set-Valued and Variational Analysis
ISSN: 1877-0533
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. In this paper, we consider the optimal control of semilinear fractional PDEs with both spectral and integral fractional diffusion operators of order 2s with s ∈ (0, 1). We first prove the boundedness of solutions to both semilinear fractional PDEs under minimal regularity assumptions on the domain and data. We next introduce an optimal growth condition on the nonlinearity to show the Lipschitz continuity of the solution map for the semilinear elliptic equations with respect to the data. We further apply our ideas to show the existence of solutions to optimal control problems with semilinear fractional equations as constraints. Under the standard assumptions on the nonlinearity (twice continuously differentiable) we derive the first and second order optimality conditions.
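The spectral fractional diffusion operator of order 2s mentioned above can be sketched in one dimension. This is an illustrative fragment, not from the paper: it assumes the Dirichlet Laplacian on (0,1), whose eigenfunctions are sin(kπx) with eigenvalues (kπ)², and applies (−Δ)^s by raising each eigenvalue to the power s in the sine eigenbasis; the grid size and the eigenfunction check are arbitrary choices.

```python
import numpy as np

def spectral_fractional_laplacian(u, s):
    """Apply (-Delta)^s on (0,1) with zero Dirichlet data via the spectral
    definition: expand u in the sin(k*pi*x) eigenbasis and scale mode k
    by its eigenvalue (k*pi)^2 raised to the power s."""
    n = len(u)                                      # interior points x_j = j/(n+1)
    j = np.arange(1, n + 1)
    k = np.arange(1, n + 1)
    S = np.sin(np.pi * np.outer(k, j) / (n + 1))    # discrete sine basis (symmetric)
    coeffs = (2.0 / (n + 1)) * S @ u                # sine coefficients of u
    lam = (np.pi * k) ** 2                          # Dirichlet eigenvalues of -Delta
    return S.T @ (lam ** s * coeffs)

n = 127
x = np.arange(1, n + 1) / (n + 1)
u = np.sin(np.pi * x)                               # first eigenfunction
s = 0.5
v = spectral_fractional_laplacian(u, s)
# eigenfunction property: (-Delta)^s sin(pi x) = pi^(2s) sin(pi x)
print(np.max(np.abs(v - np.pi ** (2 * s) * u)))
```

The check recovers the defining property of the spectral definition: on an eigenfunction, the operator acts as multiplication by the fractional power of the eigenvalue.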
  2. We consider optimal control of fractional-in-time (subdiffusive, i.e., for $0 < \gamma < 1$) semilinear parabolic PDEs associated with various notions of diffusion operators in a unifying fashion. Under general assumptions on the nonlinearity we first show the existence and regularity of solutions to the forward and the associated backward (adjoint) problems. In the second part, we prove existence of optimal controls and characterize the associated first-order optimality conditions. Several examples involving fractional-in-time (and some fractional-in-space diffusion) equations are described in detail. The most challenging obstacle we overcome is the failure of the semigroup property for the semilinear problem in any scaling of (frequency-domain) Hilbert spaces.
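A standard way to discretize the subdiffusive Caputo time derivative above is the classical L1 scheme; the following sketch is a generic textbook discretization, not the scheme used in that paper. It solves the scalar fractional ODE D_t^γ u = f(t) with u(0) = 0 and checks it against a manufactured solution.

```python
import numpy as np
from math import gamma

def l1_caputo_solve(f, gam, T, N):
    """Solve D_t^gam u = f(t), u(0) = 0, for gam in (0,1), using the
    classical L1 discretization of the Caputo derivative on the
    uniform mesh t_n = n*T/N."""
    tau = T / N
    j = np.arange(N + 1)
    b = j[1:] ** (1 - gam) - j[:-1] ** (1 - gam)   # L1 weights, b[0] = 1
    c = tau ** gam * gamma(2 - gam)
    u = np.zeros(N + 1)
    d = np.zeros(N + 1)                            # increments d[m] = u^m - u^(m-1)
    for n in range(1, N + 1):
        hist = np.dot(b[1:n], d[n - 1:0:-1])       # memory of all past increments
        d[n] = c * f(n * tau) - hist
        u[n] = u[n - 1] + d[n]
    return u

gam = 0.5
# manufactured solution u(t) = t^2, since D_t^gam t^2 = 2 t^(2-gam) / Gamma(3-gam)
f = lambda t: 2.0 * t ** (2 - gam) / gamma(3 - gam)
u = l1_caputo_solve(f, gam, T=1.0, N=200)
print(abs(u[-1] - 1.0))   # discretization error at t = 1
```

The history sum over all past increments is exactly the nonlocal memory that breaks the semigroup property the abstract refers to: unlike a classical time-stepper, the state at t_n cannot be computed from the previous step alone.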
  3. Tasks across diverse application domains can be posed as large-scale optimization problems; these include graphics, vision, machine learning, imaging, health, scheduling, planning, and energy system forecasting. Independently of the application domain, proximal algorithms have emerged as a formal optimization method that successfully solves a wide array of existing problems, often exploiting problem-specific structures in the optimization. Although model-based formal optimization provides a principled approach to problem modeling with convergence guarantees, at first glance, this seems to be at odds with black-box deep learning methods. A recent line of work shows that, when combined with learning-based ingredients, model-based optimization methods are effective, interpretable, and allow for generalization to a wide spectrum of applications with little or no extra training data. However, experimenting with such hybrid approaches for different tasks by hand requires domain expertise in both proximal optimization and deep learning, which is often error-prone and time-consuming. Moreover, naively unrolling these iterative methods produces lengthy compute graphs, which, when differentiated via autograd techniques, result in exploding memory consumption, making batch-based training challenging. In this work, we introduce ∇-Prox, a domain-specific modeling language and compiler for large-scale optimization problems using differentiable proximal algorithms. ∇-Prox allows users to specify optimization objective functions of unknowns concisely at a high level, and intelligently compiles the problem into compute- and memory-efficient differentiable solvers. One of the core features of ∇-Prox is its full differentiability, which supports hybrid model- and learning-based solvers integrating proximal optimization with neural network pipelines. Example applications of this methodology include learning-based priors and/or sample-dependent inner-loop optimization schedulers, learned with deep equilibrium learning or deep reinforcement learning. With a few lines of code, we show ∇-Prox can generate performant solvers for a range of image optimization problems, including end-to-end computational optics, image deraining, and compressive magnetic resonance imaging. We also demonstrate ∇-Prox can be used in a completely orthogonal application domain of energy system planning, an essential task in the energy crisis and the clean energy transition, where it outperforms state-of-the-art CVXPY and commercial Gurobi solvers.
  4. In this paper we study the existence, the optimal regularity of solutions, and the regularity of the free boundary near the so-called \emph{regular points} in a thin obstacle problem that arises as the local extension of the obstacle problem for the fractional heat operator $(\partial_t - \Delta_x)^s$ for $s \in (0,1)$. Our regularity estimates are completely local in nature. This aspect is of crucial importance in our forthcoming work on the blowup analysis of the free boundary, including the study of the singular set. Our approach is based on first establishing the boundedness of the time-derivative of the solution. This allows reduction to an elliptic problem at every fixed time level. Using several results from elliptic theory, including the epiperimetric inequality, we establish the optimal regularity of solutions as well as $H^{1+\gamma,\frac{1+\gamma}{2}}$ regularity of the free boundary near such regular points.
  5. We analyze online (Bottou & Bengio, 1994) and mini-batch (Sculley, 2010) k-means variants. Both scale up the widely used Lloyd’s algorithm via stochastic approximation, and have become popular for large-scale clustering and unsupervised feature learning. We show, for the first time, that they have global convergence towards “local optima” at rate O(1/t) under general conditions. In addition, we show that if the dataset is clusterable, stochastic k-means with suitable initialization converges to an optimal k-means solution at rate O(1/t) with high probability. The k-means objective is non-convex and non-differentiable; we exploit ideas from non-convex gradient-based optimization by providing a novel characterization of the trajectory of the k-means algorithm on its solution space, and circumvent its non-differentiability via geometric insights about the k-means update. 
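The mini-batch update analyzed in the abstract above can be sketched directly. This is a minimal illustration, assuming the per-center learning rate 1/count from Sculley (2010); the two-blob data and the deterministic initialization (one seed point per blob) are illustrative choices, not part of the analyzed algorithm.

```python
import numpy as np

def minibatch_kmeans(X, centers, batch_size=32, iters=300, seed=0):
    """Mini-batch k-means in the style of Sculley (2010): each step draws a
    small random batch, assigns batch points to their nearest center, and
    moves each center toward its points with a decaying per-center learning
    rate 1/count -- a stochastic approximation of Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    centers = centers.astype(float).copy()
    counts = np.zeros(len(centers))          # per-center update counts
    for _ in range(iters):
        batch = X[rng.choice(len(X), batch_size, replace=False)]
        dist2 = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        nearest = dist2.argmin(1)            # nearest center for each batch point
        for point, c in zip(batch, nearest):
            counts[c] += 1
            eta = 1.0 / counts[c]            # decaying step size
            centers[c] = (1.0 - eta) * centers[c] + eta * point
    return centers

# two well-separated Gaussian blobs; seed one center in each blob for the demo
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (200, 2)),
               rng.normal(10.0, 0.5, (200, 2))])
centers = minibatch_kmeans(X, centers=X[[0, 200]])
print(np.sort(centers[:, 0]))
```

The 1/count schedule is what yields the O(1/t) convergence rate discussed in the abstract: each center performs a running average of the points ever assigned to it.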