The Quantum Approximate Optimization Algorithm (QAOA) aims to search for approximate solutions to discrete optimization problems with near-term quantum computers. As no algorithmic guarantee is possible for QAOA to outperform classical computers without a proof that bounded-error quantum polynomial time (BQP) ≠ nondeterministic polynomial time (NP), it is necessary to investigate the empirical advantages of QAOA. We identify a computational phase transition of QAOA when solving hard problems such as SAT: random instances are hardest to train at a critical problem density. We connect this transition to the controllability and complexity of QAOA circuits. Moreover, we find that the critical problem density generally deviates from the SAT-UNSAT phase transition, where the hardest instances for classical algorithms lie. We then show that the high-problem-density region, which limits QAOA's performance on hard optimization problems (reachability deficits), is actually a good place to utilize QAOA: its approximation ratio decays much more slowly with problem density than that of classical approximation algorithms. Indeed, it is exactly in this region that quantum advantages of QAOA over classical approximation algorithms can be identified.
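The variational structure described above can be made concrete with a minimal depth-1 QAOA sketch in plain NumPy. This is illustrative only: a toy Max-Cut instance on a 4-cycle stands in for the random SAT instances studied in the abstract, and the grid search stands in for the outer-loop training whose difficulty the phase transition concerns.

```python
import numpy as np

# Toy problem: Max-Cut on a 4-cycle (our stand-in for the paper's SAT instances).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

# Diagonal cost: number of cut edges for each of the 2^n bitstrings.
cost = np.zeros(2 ** n)
for z in range(2 ** n):
    bits = [(z >> q) & 1 for q in range(n)]
    cost[z] = sum(bits[u] != bits[v] for u, v in edges)

def apply_rx(state, q, beta):
    """Apply exp(-i*beta*X) on qubit q of an n-qubit statevector."""
    s = np.moveaxis(state.reshape([2] * n), q, 0)
    a, b = s[0].copy(), s[1].copy()
    c, d = np.cos(beta), -1j * np.sin(beta)
    s[0], s[1] = c * a + d * b, d * a + c * b
    return np.moveaxis(s, 0, q).reshape(-1)

def qaoa_expectation(gamma, beta):
    """Expected cut value of the depth-1 QAOA state with angles (gamma, beta)."""
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # |+>^n
    state = np.exp(-1j * gamma * cost) * state                   # cost layer
    for q in range(n):                                           # mixer layer
        state = apply_rx(state, q, beta)
    return float(np.real(np.sum(np.abs(state) ** 2 * cost)))

# Grid search over the two angles, a crude stand-in for variational training.
best = max(qaoa_expectation(g, b)
           for g in np.linspace(0, np.pi, 30)
           for b in np.linspace(0, np.pi, 30))
ratio = best / cost.max()  # approximation ratio against the optimum
```

The `ratio` computed at the end is the same quantity whose decay with problem density the abstract compares against classical approximation algorithms.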
This content will become publicly available on December 11, 2024
On the approximability of random-hypergraph MAX-3-XORSAT problems with quantum algorithms
Constraint satisfaction problems are an important area of computer science. Many of these problems are in the complexity class NP, which is exponentially hard for all known methods, both in the worst case and often for typical instances. Fundamentally, the lack of any guided local-minimum-escape method ensures the hardness of both exact and approximate optimization classically, but the intuitive mechanism for approximation hardness in quantum algorithms based on Hamiltonian time evolution is poorly understood. We explore this question using the prototypically hard MAX-3-XORSAT problem class. We conclude that the mechanisms for quantum exact and approximation hardness are fundamentally distinct. We qualitatively identify why traditional methods such as quantum adiabatic optimization are not good approximation algorithms. We propose a new spectral folding optimization method that does not suffer from these issues and study it analytically and numerically. We consider random rank-3 hypergraphs, including extremal planted-solution instances, where the ground state satisfies an anomalously high fraction of constraints compared to truly random problems. We show that, if we define the energy to be E = N_unsat − N_sat, then spectrally folded quantum optimization will return states with energy E ≤ A·E_GS (where E_GS is the ground-state energy) in polynomial time, where, conservatively, A ≃ 0.6. We thoroughly benchmark variations of spectrally folded quantum optimization for random classically approximation-hard (planted-solution) instances in simulation and find performance consistent with this prediction. We do not claim that this approximation guarantee holds for all possible hypergraphs, though our algorithm's mechanism can likely generalize widely. These results suggest that quantum computers are more powerful for approximate optimization than had previously been assumed.
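The energy convention E = N_unsat − N_sat and the planted-solution instances can be illustrated with a small sketch (sizes, seeds, and helper names are ours, not the paper's; this is not the spectral folding algorithm itself):

```python
import numpy as np

# Random rank-3 XOR constraints over n Boolean variables (illustrative sizes).
rng = np.random.default_rng(0)
n, m = 12, 36  # 12 variables, 36 parity constraints

# Each constraint: three distinct variable indices whose parity must hit a target.
triples = np.array([rng.choice(n, size=3, replace=False) for _ in range(m)])

def energy(assignment, targets):
    """E = N_unsat - N_sat for a 0/1 assignment against parity targets."""
    parities = assignment[triples].sum(axis=1) % 2
    n_sat = int(np.sum(parities == targets))
    return (m - n_sat) - n_sat

# Planted-solution instance: choose targets so a hidden assignment satisfies
# every constraint, so the ground-state energy is exactly -m.
planted = rng.integers(0, 2, size=n)
planted_targets = planted[triples].sum(axis=1) % 2

# Truly random targets, for comparison with the planted case.
random_targets = rng.integers(0, 2, size=m)
```

Under this convention the approximation guarantee E ≤ A·E_GS from the abstract means returning a state within a constant factor of the planted ground-state energy −m.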
Award ID(s): 1839232
NSF-PAR ID: 10488288
Publisher / Repository: arXiv.org
Date Published:
Journal Name: arXiv.org
ISSN: 2331-8422
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this

Abstract 
Vehicle routing problems are a broad class of combinatorial optimization problems that can be formulated as the problem of finding a tour in a weighted graph that optimizes some function of the visited vertices. For instance, a canonical and extensively studied vehicle routing problem is the orienteering problem, where the goal is to find a tour that maximizes the number of vertices visited by a given deadline. In this paper, we consider the computational tractability of a well-known generalization of the orienteering problem called the OrientMTW problem. The input to OrientMTW consists of a weighted graph G(V, E), where for each vertex v ∈ V we are given a set of time instants Tv ⊆ [T], and a source vertex s. A tour starting at s is said to visit a vertex v if it transits through v at any time in the set Tv. The goal is to find a tour starting at the source vertex that maximizes the number of vertices visited. It is known that this problem admits a quasi-polynomial time O(log OPT)-approximation ratio, where OPT is the optimal solution value, but until now no hardness better than APX-hardness was known for this problem. Our main result is a super-constant hardness for this problem that holds even when the underlying graph G is an undirected tree. This is the first super-constant hardness result for the OrientMTW problem. The starting point for our result is the hardness of the Set Cover problem, which is known to hold on instances with a special structure. We exploit this special structure of the hard Set Cover instances to first obtain a new proof of the APX-hardness result for OrientMTW that holds even on trees of depth 2. We then recursively amplify this constant-factor hardness to a super-constant hardness, while keeping the resulting topology a tree.
Our amplified hardness proof crucially utilizes a delicate concavity property, which shows that in our encoding of Set Cover instances as instances of the OrientMTW problem, whenever the optimal cost for the Set Cover instance is large, any tour, no matter how it allocates its time across different subtrees, cannot visit too many vertices overall. We believe that this reduction template may also prove useful in showing hardness of other vehicle routing problems.
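The OrientMTW objective can be illustrated by brute force on a toy instance. The conventions here (unit edge weights, start time 0, waiting in place allowed, a hard horizon T) are our own simplifications for illustration, not the paper's formal model:

```python
# Brute-force evaluation of the OrientMTW objective on a tiny depth-2 tree.
T = 6
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}        # tree rooted at vertex 0
time_windows = {0: {0}, 1: {1, 5}, 2: {2}, 3: {2}}  # T_v for each vertex v

def best_tour(v, t, visited):
    """Max number of vertices visitable from state (vertex v, time t)."""
    visited = visited | ({v} if t in time_windows[v] else set())
    if t == T:
        return len(visited)
    moves = adj[v] + [v]  # move to a neighbour, or wait in place
    return max(best_tour(w, t + 1, frozenset(visited)) for w in moves)

print(best_tour(0, 0, frozenset()))
```

In this instance vertices 2 and 3 are both visitable only at time 2, so no tour collects all four vertices; the exponential-time search above simply enumerates every walk up to the horizon, which is exactly what the hardness results rule out doing efficiently at scale.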

We study robust convex quadratic programs where the uncertain problem parameters can contain both continuous and integer components. Under the natural boundedness assumption on the uncertainty set, we show that the generic problems are amenable to exact copositive programming reformulations of polynomial size. These convex optimization problems are NP-hard but admit a conservative semidefinite programming (SDP) approximation that can be solved efficiently. We prove that the popular approximate S-lemma method, which is valid only in the case of continuous uncertainty, is weaker than our approximation. We also show that all results can be extended to the two-stage robust quadratic optimization setting if the problem has complete recourse. We assess the effectiveness of our proposed SDP reformulations and demonstrate their superiority over state-of-the-art solution schemes on instances of least squares, project management, and multi-item newsvendor problems.
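The min-max structure of a robust least squares problem can be sketched with a finite scenario set, where discrete shifts of b mimic the integer uncertainty component. This is a crude stand-in for the copositive/SDP machinery of the abstract: the scenario sampling and the subgradient loop are our simplifications, not the paper's reformulation.

```python
import numpy as np

# Robust least squares: min_x max_i ||A_i x - b_i||^2 over a finite scenario set.
rng = np.random.default_rng(1)
A0, b0 = rng.normal(size=(8, 3)), rng.normal(size=8)
# Continuous perturbation of A plus an integer shift k of b per scenario.
scenarios = [(A0 + 0.1 * rng.normal(size=(8, 3)), b0 + k) for k in (-1, 0, 1)]

def worst_case(x):
    """Worst-case residual over all scenarios (the robust objective)."""
    return max(np.sum((A @ x - b) ** 2) for A, b in scenarios)

# Subgradient descent: step along the gradient of the currently worst scenario.
x = np.zeros(3)
for _ in range(500):
    A, b = max(scenarios, key=lambda s: np.sum((s[0] @ x - s[1]) ** 2))
    x -= 0.01 * 2 * A.T @ (A @ x - b)
```

An exact approach would instead solve the SDP relaxation in one shot; the loop above only demonstrates that the robust objective is a pointwise maximum of convex quadratics.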

In recent years several compressed indexes based on variants of the Burrows-Wheeler transformation have been introduced. Some of these are used to index structures far more complex than a single string, as was originally done with the FM-index [Ferragina and Manzini, J. ACM 2005]. As such, there has been an increasing effort to better understand under which conditions such an indexing scheme is possible. This has led to the introduction of Wheeler graphs [Gagie et al., Theor. Comput. Sci., 2017]. Gagie et al. showed that de Bruijn graphs, generalized compressed suffix arrays, and several other BWT-related structures can be represented as Wheeler graphs, and that Wheeler graphs can be indexed in a space-efficient way. Hence, being able to recognize whether a given graph is a Wheeler graph, or being able to approximate a given graph by a Wheeler graph, could have numerous applications in indexing. Here we resolve the open question of whether there exists an efficient algorithm for recognizing if a given graph is a Wheeler graph. We present the following results:
- The problem of recognizing whether a given graph G = (V, E) is a Wheeler graph is NP-complete for any edge label alphabet of size σ ≥ 2, even when G is a DAG. This holds even on a restricted subset of graphs called d-NFAs for d ≥ 5, in contrast to recent results demonstrating that the problem can be solved in polynomial time for d-NFAs with d ≤ 2. We also show the recognition problem can be solved in linear time for σ = 1.
- There exists a 2^{e log σ + O(n + e)}-time exact algorithm, where n = |V| and e = |E|. This algorithm relies on graph isomorphism being computable in strictly sub-exponential time.
- We define an optimization variant of the problem called Wheeler Graph Violation (WGV), where the aim is to remove the minimum number of edges in order to obtain a Wheeler graph. We show WGV is APX-hard, even when G is a DAG, implying there exists a constant C ≥ 1 for which there is no C-approximation algorithm (unless P = NP). Also, conditioned on the Unique Games Conjecture, for all C ≥ 1, it is NP-hard to find a C-approximation.
- We define the Wheeler Subgraph problem (WS), where the aim is to find the largest subgraph which is a Wheeler graph (the dual of WGV). In contrast to WGV, we prove that the WS problem is in APX for σ = O(1).
The above findings suggest that most problems under this theme are computationally difficult. However, we identify a class of graphs for which the recognition problem is polynomial-time solvable, raising the open question of which parameters determine this problem's difficulty.
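The Wheeler axioms themselves are easy to state and check once a vertex order is fixed; what is hard is finding the order. The exhaustive O(|V|!) search below illustrates the definition on toy instances (the axioms follow Gagie et al.; the tiny examples are ours, and this is of course not the paper's algorithm):

```python
from itertools import permutations

def is_wheeler(vertices, edges):
    """edges: list of (u, v, label). True iff some total order on the vertices
    satisfies the Wheeler axioms: in-degree-0 vertices come first, and edge
    targets are sorted consistently with edge labels and sources."""
    indeg = {v: 0 for v in vertices}
    for _, v, _ in edges:
        indeg[v] += 1
    for order in permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        # Axiom 1: vertices of in-degree 0 precede all others.
        if any(indeg[u] == 0 and indeg[v] > 0 and pos[u] > pos[v]
               for u in vertices for v in vertices):
            continue
        ok = True
        for (u, v, a) in edges:
            for (u2, v2, a2) in edges:
                if a < a2 and pos[v] >= pos[v2]:      # a < a'  =>  v < v'
                    ok = False
                if a == a2 and pos[u] < pos[u2] and pos[v] > pos[v2]:
                    ok = False                         # a = a', u < u' => v <= v'
        if ok:
            return True
    return False

# A labelled path is Wheeler; a same-label 2-cycle cannot be ordered.
print(is_wheeler(['a', 'b', 'c'], [('a', 'b', 'x'), ('b', 'c', 'y')]))
print(is_wheeler(['a', 'b'], [('a', 'b', 'x'), ('b', 'a', 'x')]))
```

The NP-completeness result above says no algorithm can, in general, do fundamentally better than searching over orders like this (up to the stated exponential bound).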

Realizing quantum speedup for practically relevant, computationally hard problems is a central challenge in quantum information science. Using Rydberg atom arrays with up to 289 qubits in two spatial dimensions, we experimentally investigate quantum algorithms for solving the Maximum Independent Set problem. We use a hardware-efficient encoding associated with Rydberg blockade, realize closed-loop optimization to test several variational algorithms, and subsequently apply them to systematically explore a class of graphs with programmable connectivity. We find that the problem hardness is controlled by the solution degeneracy and the number of local minima, and we experimentally benchmark the quantum algorithm's performance against classical simulated annealing. On the hardest graphs, we observe a superlinear quantum speedup in finding exact solutions in the deep-circuit regime and analyze its origins.
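The classical baseline named in the abstract, simulated annealing for Maximum Independent Set, can be sketched in a few lines. The tiny graph, penalty weight, and cooling schedule below are our illustrative choices, not the paper's benchmark configuration:

```python
import math
import random

def mis_anneal(n, edges, steps=20000, seed=0):
    """Simulated annealing for Maximum Independent Set on an n-vertex graph."""
    rng = random.Random(seed)
    def score(s):  # reward set size, penalise each violated edge by 2
        return sum(s) - 2 * sum(s[u] and s[v] for u, v in edges)
    state = [0] * n
    best_score = 0
    for t in range(steps):
        temp = max(0.01, 1.0 - t / steps)   # linear cooling with a floor
        cand = state[:]
        cand[rng.randrange(n)] ^= 1         # flip one vertex in or out of the set
        delta = score(cand) - score(state)
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            state = cand
        if all(not (state[u] and state[v]) for u, v in edges):
            best_score = max(best_score, sum(state))  # record valid sets only
    return best_score

# 6-cycle: the maximum independent set has size 3 (alternate vertices).
print(mis_anneal(6, [(i, (i + 1) % 6) for i in range(6)]))
```

On instances this small annealing is trivially effective; the abstract's point is how the quantum algorithm's scaling compares on hard graphs, where degeneracy and local minima slow annealing down.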