

Title: On the numerical solution of nonlinear eigenvalue problems for the Monge-Ampère operator
In this article, we report the results we obtained when investigating the numerical solution of some nonlinear eigenvalue problems for the Monge-Ampère operator v ↦ det D^2 v. The methodology we employ relies on the following ingredients: (i) a divergence formulation of the eigenvalue problems under consideration; (ii) the time discretization by operator-splitting of an initial value problem (a kind of gradient flow) associated with each eigenvalue problem; (iii) a finite element approximation relying on spaces of continuous piecewise affine functions. To validate the above methodology, we applied it to the solution of problems with known exact solutions: the results we obtained suggest convergence to the exact solution as the space discretization step h → 0. We also considered test problems with no known exact solutions.
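For orientation, here is a minimal LaTeX sketch of what such a divergence formulation can rest on, assuming the classical Monge-Ampère eigenvalue problem of Lions as the prototype; the paper's exact formulation and boundary conditions may differ.

```latex
% Classical Monge-Ampere eigenvalue problem (Lions), assumed here as the
% prototype; the paper's exact formulation may differ:
\[
  \det D^2 u \;=\; \lambda\,(-u)^n \ \text{in } \Omega, \qquad
  u = 0 \ \text{on } \partial\Omega, \qquad u \ \text{convex}, \quad \lambda > 0.
\]
% Divergence identity for smooth v (the rows of the cofactor matrix of D^2 v
% are divergence free), which is what makes a "divergence formulation" possible:
\[
  \det D^2 v \;=\; \frac{1}{n}\,\nabla\cdot\bigl(\mathrm{cof}(D^2 v)\,\nabla v\bigr).
\]
```

Testing the right-hand side against continuous piecewise affine finite element functions is the natural way to exploit such an identity, since only first derivatives of the trial and test functions then need to be integrated.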
Award ID(s):
2012046
NSF-PAR ID:
10278869
Author(s) / Creator(s):
; ; ;
Editor(s):
Buttazzo, G.; Casas, E.; de Teresa, L.; Glowinski, R.; Leugering, G.; Trélat, E.; Zhang, X.
Date Published:
Journal Name:
ESAIM: Control, Optimisation and Calculus of Variations
Volume:
26
ISSN:
1292-8119
Page Range / eLocation ID:
118
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The “asymmetry” between spatiotemporally varying passenger demand and fixed-capacity transportation supply has been a long-standing problem in urban mass transportation (UMT) systems around the world. The emerging modular autonomous vehicle (MAV) technology offers us an opportunity to close the substantial gap between passenger demand and vehicle capacity through station-wise docking and undocking operations. However, an appropriate approach that can efficiently solve the operational design problem for UMT corridor systems with MAVs is still lacking. To bridge this methodological gap, this paper proposes a continuum approximation (CA) model that can offer near-optimal solutions to the operational design for MAV-based transit corridors very efficiently. We investigate the theoretical properties of the optimal solutions to the investigated problem in a certain (yet not uncommon) case. These theoretical properties allow us to estimate the seat demand of each time neighborhood from the arrival demand curves, which recovers the “local impact” property of the investigated problem. With this property, a CA model is properly formulated to decompose the original problem into a finite number of subproblems that can be solved analytically. A discretization heuristic is then proposed to convert the analytical solution from the CA model into feasible solutions to the original problem. With two sets of numerical experiments, we show that the proposed CA model can achieve near-optimal solutions (with gaps less than 4% for most cases) to the investigated problem in almost no time (less than 10 ms) for large-scale instances with a wide range of parameter settings (a commercial solver may not even obtain a feasible solution in several hours). The theoretical properties are verified, and managerial insights regarding how input parameters affect system performance are provided through these numerical results. Additionally, results also reveal that, although the CA model does not incorporate vehicle repositioning decisions, the timetabling decisions obtained by solving the CA model can be easily applied to obtain near-optimal repositioning decisions (with gaps less than 5% in most instances) very efficiently (within 10 ms). Thus, the proposed CA model provides a foundation for developing solution approaches for other problems (e.g., MAV repositioning) with more complex system operation constraints whose exact optimal solutions can hardly be found with discrete modeling methods.
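Purely as an illustration of what a "discretization heuristic" of this kind can look like (the function names, formulas, and demand profile below are assumptions, not the paper's method), here is a toy sketch that rounds a continuum headway/capacity profile into discrete dispatch times and integer module counts.

```python
import math

# Toy discretization heuristic (hypothetical; names and formulas are assumptions,
# not the paper's method): convert a continuum-approximation solution, given as
# headway(t) and capacity(t) functions, into discrete dispatch times and integer
# module counts per dispatch.
def discretize(headway, capacity, module_capacity, horizon):
    dispatches = []          # list of (dispatch_time, number_of_modules)
    t = 0.0
    while t < horizon:
        modules = max(1, math.ceil(capacity(t) / module_capacity))
        dispatches.append((round(t, 2), modules))
        t += headway(t)      # next dispatch after the locally optimal headway
    return dispatches

if __name__ == "__main__":
    # Synthetic morning-peak profile: shorter headways, larger vehicles near t = 120 min.
    peak = lambda t: math.exp(-((t - 120.0) / 45.0) ** 2)   # demand intensity in [0, 1]
    headway = lambda t: 12.0 - 8.0 * peak(t)                # minutes between dispatches
    capacity = lambda t: 40.0 + 200.0 * peak(t)             # passengers to be seated
    for time, modules in discretize(headway, capacity, module_capacity=30, horizon=240.0):
        print(f"t = {time:6.1f} min  ->  {modules} modules")
```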
  2. Given a Boolean formula ϕ(x) in conjunctive normal form (CNF), the density of states counts the number of variable assignments that violate exactly e clauses, for all values of e. Thus, the density of states is a histogram of the number of unsatisfied clauses over all possible assignments. This computation generalizes both the maximum-satisfiability (MAX-SAT) and model counting problems, and it not only provides insight into the entire solution space but also yields a measure of the hardness of the problem instance. Consequently, in real-world scenarios, computing it exactly is typically infeasible even when using state-of-the-art algorithms. While finding an exact answer to this problem is a computationally intensive task, we propose a novel approach for estimating the density of states based on concentration-of-measure inequalities. The methodology results in a quadratic unconstrained binary optimization (QUBO) problem, which is particularly amenable to quantum annealing-based solutions. We present the overall approach and compare results from the D-Wave quantum annealer against the best-known classical algorithms such as the Hamze-de Freitas-Selby (HFS) algorithm and satisfiability modulo theory (SMT) solvers.
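For a concrete picture of the quantity being estimated, the following sketch computes the exact density of states of a tiny CNF formula by brute-force enumeration. The DIMACS-style clause encoding is an assumption made here for illustration; the paper's QUBO-based estimator is, of course, aimed at instances where this enumeration is hopeless.

```python
from itertools import product
from collections import Counter

# Exact density of states of a tiny CNF formula by exhaustive enumeration.
# A clause is a tuple of non-zero integers: literal k asks variable |k| to be
# True if k > 0 and False if k < 0 (DIMACS-style encoding, assumed here).
def density_of_states(num_vars, clauses):
    hist = Counter()
    for assignment in product([False, True], repeat=num_vars):
        violated = sum(
            1 for clause in clauses
            if not any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
        )
        hist[violated] += 1
    return hist  # hist[e] = number of assignments violating exactly e clauses

if __name__ == "__main__":
    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    clauses = [(1, 2), (-1, 3), (-2, -3)]
    for e, count in sorted(density_of_states(3, clauses).items()):
        print(f"{count} assignments violate exactly {e} clauses")
```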
  3. Recently, there has been significant research interest in the exact solution of the AC optimal power flow (AC-OPF) problem. A semidefinite programming (SDP) relaxation solves many OPF problems globally. However, there exist practical problems for which the semidefinite relaxation fails to yield the global solution. The appropriateness of a relaxation-based approach for AC-OPF therefore depends on the success or failure of the SDP relaxation. This paper formulates a quadratic AC-OPF problem with a single negative eigenvalue in the objective function, subject to linear and conic constraints. The proposed solution method for the AC-OPF model covers the classical AC economic dispatch problem, which is known to be NP-hard. By combining successive linear conic optimization (SLCO), convex relaxation, and a line search technique, we present a global algorithm for AC-OPF which can locate a globally optimal solution to the underlying AC-OPF within a given tolerance of the global optimum by solving linear conic optimization problems. The proposed algorithm is examined on a modified IEEE 6-bus test system, and promising numerical results are reported.
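Written out schematically (this is only a reading of the abstract; the matrices Q, c, A, b and the cone K below are placeholders, not the paper's data), the problem class has the form of a nonconvex quadratic program over linear and conic constraints:

```latex
% Quadratic program with exactly one negative eigenvalue in the objective,
% over linear and conic constraints (schematic placeholders, not AC-OPF data):
\[
  \min_{x}\; x^{\top} Q x + c^{\top} x
  \quad \text{s.t.}\quad A x \le b, \quad x \in \mathcal{K},
  \qquad \lambda_1(Q) < 0 \le \lambda_2(Q) \le \dots \le \lambda_m(Q).
\]
```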
  4. Braverman, Mark (Ed.)
    Grothendieck’s inequality [Grothendieck, 1953] states that there is an absolute constant K > 1 such that for any n× n matrix A, ‖A‖_{∞→1} := max_{s,t ∈ {± 1}ⁿ}∑_{i,j} A[i,j]⋅s(i)⋅t(j) ≥ 1/K ⋅ max_{u_i,v_j ∈ S^{n-1}}∑_{i,j} A[i,j]⋅⟨u_i,v_j⟩. In addition to having a tremendous impact on Banach space theory, this inequality has found applications in several unrelated fields like quantum information, regularity partitioning, communication complexity, etc. Let K_G (known as Grothendieck’s constant) denote the smallest constant K above. Grothendieck’s inequality implies that a natural semidefinite programming relaxation obtains a constant factor approximation to ‖A‖_{∞ → 1}. The exact value of K_G is yet unknown with the best lower bound (1.67…) being due to Reeds and the best upper bound (1.78…) being due to Braverman, Makarychev, Makarychev and Naor [Braverman et al., 2013]. In contrast, the little Grothendieck inequality states that under the assumption that A is PSD the constant K above can be improved to π/2 and moreover this is tight. The inapproximability of ‖A‖_{∞ → 1} has been studied in several papers culminating in a tight UGC-based hardness result due to Raghavendra and Steurer (remarkably they achieve this without knowing the value of K_G). Briet, Regev and Saket [Briët et al., 2015] proved tight NP-hardness of approximating the little Grothendieck problem within π/2, based on a framework by Guruswami, Raghavendra, Saket and Wu [Guruswami et al., 2016] for bypassing UGC for geometric problems. This also remained the best known NP-hardness for the general Grothendieck problem due to the nature of the Guruswami et al. framework, which utilized a projection operator onto the degree-1 Fourier coefficients of long code encodings, which naturally yielded a PSD matrix A. We show how to extend the above framework to go beyond the degree-1 Fourier coefficients, using the global structure of optimal solutions to the Grothendieck problem. As a result, we obtain a separation between the NP-hardness results for the two problems, obtaining an inapproximability result for the Grothendieck problem, of a factor π/2 + ε₀ for a fixed constant ε₀ > 0. 
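As a small concrete aid for the notation above, the ∞→1 norm can be evaluated by brute force for tiny matrices; for fixed signs s the optimal t is the sign pattern of Aᵀs, so only the 2ⁿ choices of s need to be enumerated. This is purely illustrative and is unrelated to the SDP relaxation or the hardness reductions discussed in the abstract.

```python
from itertools import product
import numpy as np

# Brute-force evaluation of ||A||_{infty -> 1} = max over s, t in {+-1}^n of
# sum_{i,j} A[i, j] * s[i] * t[j].  Exponential in n, so only for tiny examples.
def inf_to_one_norm(A):
    n, _ = A.shape
    best = -np.inf
    for s in product([-1.0, 1.0], repeat=n):
        # For fixed s, the optimal t is sign(A^T s), so no inner enumeration is needed.
        best = max(best, np.abs(A.T @ np.asarray(s)).sum())
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6))
    print("||A||_{infty->1} =", inf_to_one_norm(A))
```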
  5. Reduced bases have been introduced for the approximation of parametrized PDEs in applications where many online queries are required. Their numerical efficiency for such problems has been theoretically confirmed in Binev et al. (SIAM J. Math. Anal. 43 (2011) 1457–1472) and DeVore et al. (Constructive Approximation 37 (2013) 455–466), where it is shown that the reduced basis space V_n of dimension n, constructed by a certain greedy strategy, has approximation error similar to that of the optimal space associated to the Kolmogorov n-width of the solution manifold M. The greedy construction of the reduced basis space is performed in an offline stage which requires at each step a maximization of the current error over the parameter space. For the purpose of numerical computation, this maximization is performed over a finite training set obtained through a discretization of the parameter domain. To guarantee a final approximation error ε for the space generated by the greedy algorithm requires in principle that the snapshots associated to this training set constitute an approximation net for the solution manifold with accuracy of order ε. Hence, the size of the training set is the ε covering number for M, and this covering number typically behaves like exp(Cε^{−1/s}) for some C > 0 when the solution manifold has n-width decay O(n^{−s}). Thus, the sheer size of the training set prohibits implementation of the algorithm when ε is small. The main result of this paper shows that, if one is willing to accept results which hold with high probability, rather than with certainty, then for a large class of relevant problems one may replace the fine discretization by a random training set of size polynomial in ε^{−1}. Our proof of this fact is established by using inverse inequalities for polynomials in high dimensions.
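To make the greedy construction concrete, here is a minimal numpy sketch of a (strong) greedy loop over a finite training set of snapshot vectors: at each step it adds the snapshot worst approximated by the current space. In practice the reduced basis greedy uses a cheap a posteriori error surrogate rather than exact projection errors, and the solver producing the snapshots is a PDE solver; this toy uses exact errors, synthetic snapshots, and a random training set purely for illustration.

```python
import numpy as np

# Strong greedy selection over a finite training set of snapshots (columns of S):
# at each step, add the snapshot with the largest distance to the span of the
# basis built so far.  A toy stand-in for the reduced basis greedy discussed
# above, which in practice uses a cheap error surrogate instead of exact
# projection errors.
def greedy_basis(S, n_max, tol=1e-10):
    basis = np.zeros((S.shape[0], 0))
    for _ in range(n_max):
        # Projection errors of every training snapshot onto span(basis).
        residuals = S - basis @ (basis.T @ S)
        errors = np.linalg.norm(residuals, axis=0)
        k = int(np.argmax(errors))
        if errors[k] < tol:
            break
        # Orthonormalize the new element against the current basis (Gram-Schmidt).
        basis = np.column_stack([basis, residuals[:, k] / errors[k]])
    return basis

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic "solution manifold": smooth parameter dependence -> fast n-width decay.
    params = rng.uniform(0.5, 2.0, size=200)          # random training set of parameters
    x = np.linspace(0.0, 1.0, 400)
    S = np.exp(-np.outer(x, params))                  # snapshots as columns
    V = greedy_basis(S, n_max=10)
    errs = np.linalg.norm(S - V @ (V.T @ S), axis=0)
    print("basis size:", V.shape[1], " worst training error:", errs.max())
```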