Quantum computation shows promise for addressing numerous classically intractable problems, such as optimization tasks. Many optimization problems are NP-hard, meaning that the best known exact algorithms scale exponentially with problem size, so they cannot be addressed at scale by traditional computing paradigms. The recently proposed quantum algorithm arXiv:2206.14999 addresses this challenge for some NP-hard problems and is based on classical semidefinite programming (SDP). In this manuscript, we generalize the SDP-inspired quantum algorithm to sum-of-squares programming, which targets a broader problem set. Our proposed algorithm addresses degree-k polynomial optimization problems in n variables (which are representative of many NP-hard problems) using a combination of qubits, quantum measurements, and classical calculations. We apply the proposed algorithm to the prototypical Max-SAT problem and compare its performance against classical sum-of-squares, state-of-the-art heuristic solvers, and random guessing. Simulations show that the performance of our algorithm surpasses that of classical sum-of-squares after rounding. Our results further demonstrate that our algorithm is suitable for large problems and approximates the best known classical heuristics, while also providing a more generalizable approach than problem-specific heuristics.
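For orientation, the degree-2 special case of the sum-of-squares pipeline described above is the classical Goemans-Williamson SDP relaxation followed by randomized rounding, the kind of "classical sum-of-squares after rounding" baseline the abstract mentions. Below is a minimal sketch of that baseline on MAX-CUT; the cvxpy-based solver, toy weight matrix, and trial count are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the classical degree-2 baseline: the Goemans-Williamson
# SDP relaxation of MAX-CUT plus random hyperplane rounding. Instance data
# below are illustrative assumptions only.
import numpy as np
import cvxpy as cp

def sdp_maxcut(W, trials=100, seed=0):
    """Solve the MAX-CUT SDP relaxation for weight matrix W, then round."""
    n = W.shape[0]
    X = cp.Variable((n, n), PSD=True)          # Gram matrix of unit vectors
    objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
    cp.Problem(objective, [cp.diag(X) == 1]).solve()
    # Factor X = V V^T (clip tiny negative eigenvalues from numerics).
    evals, evecs = np.linalg.eigh(X.value)
    V = evecs @ np.diag(np.sqrt(np.clip(evals, 0, None)))
    # Random hyperplane rounding: keep the best cut over several directions.
    rng = np.random.default_rng(seed)
    best = -np.inf
    for _ in range(trials):
        signs = np.sign(V @ rng.standard_normal(n))
        signs[signs == 0] = 1
        cut = np.sum(W * (1 - np.outer(signs, signs))) / 4
        best = max(best, cut)
    return best

# Toy instance: a 4-cycle, whose maximum cut value is 4.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(sdp_maxcut(W))
```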
Algorithms for Automatic Ranking of Participants and Tasks in an Anonymized Contest
We introduce a new set of problems based on the Chain Editing problem. In our version of Chain Editing, we are given a set of anonymous participants and a set of undisclosed tasks that every participant attempts. For each participant-task pair, we know whether the participant has succeeded at the task or not. We assume that participants vary in their ability to solve tasks, and that tasks vary in their difficulty to be solved. In an ideal world, stronger participants should succeed at a superset of tasks that weaker participants succeed at. Similarly, easier tasks should be completed successfully by a superset of the participants who succeed at harder tasks. In reality, it can happen that a stronger participant fails at a task that a weaker participant succeeds at. Our goal is to find a perfect nesting of the participant-task relations by flipping a minimum number of them, taking such a “nearest perfect ordering” to be the one closest to the truth of participant strengths and task difficulties. Many variants of the problem are known to be NP-hard. We propose six natural k-near versions of the Chain Editing problem and classify their complexity. The input to a k-near Chain Editing problem includes an initial ordering of the participants (or tasks) that we are required to respect by moving each participant (or task) at most k positions from the initial ordering. We obtain surprising results on the complexity of the six k-near problems: five are polynomial-time solvable using dynamic programming, but one is NP-hard.
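To make the editing objective concrete, here is a small sketch, under the simplifying assumption that both orderings are already fixed, of counting the fewest flips needed so every participant's successes form a prefix of the tasks sorted from easiest to hardest (which is exactly a perfect nesting). The matrix and helper names are illustrative, not from the paper.

```python
# Chain Editing objective under *fixed* orderings: rows are participants,
# columns are tasks from easiest to hardest. A perfect nesting means each
# row is a prefix of 1s; we count the fewest cell flips to achieve that.
# The results matrix below is an illustrative assumption.

def min_flips_to_prefix(row):
    """Fewest flips so `row` becomes 1...10...0 (a prefix of successes)."""
    n = len(row)
    best = sum(row)                      # threshold 0: flip every 1 to 0
    for t in range(1, n + 1):
        # cost: zeros among the first t entries + ones after position t
        cost = row[:t].count(0) + row[t:].count(1)
        best = min(best, cost)
    return best

def chain_editing_cost(matrix):
    """Total flips so every participant's successes form a prefix."""
    return sum(min_flips_to_prefix(row) for row in matrix)

# 3 participants x 4 tasks; the middle participant fails an easy task yet
# solves a harder one, so one flip restores a perfect nesting.
results = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 0],
]
print(chain_editing_cost(results))  # -> 1
```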
- Award ID(s):
- 1655442
- PAR ID:
- 10028125
- Publisher / Repository:
- Springer
- Date Published:
- Journal Name:
- WALCOM
- ISSN:
- 1374-5379
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Given a linear dynamical system, we consider the problem of selecting (at design-time) an optimal set of sensors (subject to certain budget constraints) to minimize the trace of the steady state error covariance matrix of the Kalman filter. Previous work has shown that this problem is NP-hard for certain classes of systems and sensor costs; in this paper, we show that the problem remains NP-hard even for the special case where the system is stable and all sensor costs are identical. Furthermore, we show the stronger result that there is no constant-factor (polynomial-time) approximation algorithm for this problem. This contrasts with other classes of sensor selection problems studied in the literature, which typically pursue constant-factor approximations by leveraging greedy algorithms and submodularity of the cost function. Here, we provide a specific example showing that greedy algorithms can perform arbitrarily poorly for the problem of design-time sensor selection for Kalman filtering.
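For context, this is roughly what the greedy heuristic criticized above looks like: repeatedly add the sensor that most reduces the trace of the steady-state error covariance, obtained from the filter DARE via SciPy's Riccati solver (using the standard duality substitution A -> A^T, C -> C^T). The system matrices, noise levels, and budget are toy assumptions.

```python
# Sketch of the greedy heuristic for design-time sensor selection: at each
# step, add the sensor whose inclusion most reduces the trace of the
# steady-state Kalman error covariance. All problem data here are toy
# assumptions chosen only to make the code run.
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

def steady_state_cov(A, Q, C_rows, R_diag):
    """A priori steady-state error covariance for the chosen sensor rows."""
    if not C_rows:
        # No sensors: for stable A the covariance solves a Lyapunov equation.
        return solve_discrete_lyapunov(A, Q)
    C = np.vstack(C_rows)
    R = np.diag(R_diag)
    # Filter DARE via duality: pass (A^T, C^T, Q, R) to the control solver.
    return solve_discrete_are(A.T, C.T, Q, R)

def greedy_select(A, Q, sensors, noises, budget):
    """Greedily pick `budget` sensor indices minimizing the covariance trace."""
    chosen = []
    for _ in range(budget):
        remaining = [i for i in range(len(sensors)) if i not in chosen]
        costs = {i: np.trace(steady_state_cov(
                        A, Q,
                        [sensors[j] for j in chosen + [i]],
                        [noises[j] for j in chosen + [i]]))
                 for i in remaining}
        chosen.append(min(costs, key=costs.get))
    return chosen

# Stable 2-state system, identical sensor costs (the hard case shown above).
A = np.array([[0.9, 0.2], [0.0, 0.8]])
Q = np.eye(2)
sensors = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
noises = [1.0, 1.0]
print(greedy_select(A, Q, sensors, noises, budget=1))
```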
-
In the certification problem, the algorithm is given a function f with certificate complexity k and an input x^⋆, and the goal is to find a certificate of size ≤ poly(k) for f’s value at x^⋆. This problem is in NP^NP, and assuming 𝖯 ≠ NP, is not in 𝖯. Prior works, dating back to Valiant in 1984, have therefore sought to design efficient algorithms by imposing assumptions on f such as monotonicity. Our first result is a BPP^NP algorithm for the general problem. The key ingredient is a new notion of the balanced influence of variables, a natural variant of influence that corrects for the bias of the function. Balanced influences can be accurately estimated via uniform generation, and classic BPP^NP algorithms are known for the latter task. We then consider certification with stricter instance-wise guarantees: for each x^⋆, find a certificate whose size scales with that of the smallest certificate for x^⋆. In sharp contrast with our first result, we show that this problem is NP^NP-hard even to approximate. We obtain an optimal inapproximability ratio, adding to a small handful of problems in the higher levels of the polynomial hierarchy for which optimal inapproximability is known. Our proof involves the novel use of bit-fixing dispersers for gap amplification.
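The balanced-influence definition is not reproduced in this abstract, but it is described as a bias-corrected variant of the ordinary influence Inf_i(f) = Pr_x[f(x) != f(x with bit i flipped)]. The sketch below estimates the ordinary influences by plain sampling; the test function and sample count are illustrative assumptions.

```python
# Monte Carlo estimate of the (ordinary) influence of each variable of a
# boolean function f: Inf_i(f) = Pr_x[f(x) != f(x with bit i flipped)].
# The paper's *balanced* influence corrects this for the bias of f; that
# correction is not reproduced here.
import random

def influences(f, n, samples=10_000, seed=0):
    rng = random.Random(seed)
    counts = [0] * n
    for _ in range(samples):
        x = [rng.randint(0, 1) for _ in range(n)]
        fx = f(x)
        for i in range(n):
            x[i] ^= 1                 # flip bit i
            if f(x) != fx:
                counts[i] += 1
            x[i] ^= 1                 # restore bit i
    return [c / samples for c in counts]

# Example: majority on 3 bits; each variable has influence 1/2.
maj3 = lambda x: int(sum(x) >= 2)
print(influences(maj3, 3))
```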
-
Optimization problems are ubiquitous in our societies and are present in almost every segment of the economy. Most of these optimization problems are NP-hard and computationally demanding, often requiring approximate solutions for large-scale instances. Machine learning frameworks that learn to approximate solutions to such hard optimization problems are a potentially promising avenue to address these difficulties, particularly when many closely related problem instances must be solved repeatedly. Supervised learning frameworks can train a model using the outputs of pre-solved instances. However, when the outputs are themselves approximations, when the optimization problem has symmetric solutions, and/or when the solver uses randomization, solutions to closely related instances may exhibit large differences and the learning task can become inherently more difficult. This paper demonstrates this critical challenge, connects the volatility of the training data to the ability of a model to approximate it, and proposes a method for producing (exact or approximate) solutions to optimization problems that are more amenable to supervised learning tasks. The effectiveness of the method is tested on hard non-linear nonconvex and discrete combinatorial problems.
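A tiny illustration of the symmetry issue and the flavor of remedy proposed: if a solver may return either member of a symmetric pair of optima (e.g., a cut and its complement), canonicalizing each label before training removes the ambiguity. The instance and canonicalization rule below are hypothetical, not taken from the paper.

```python
# Label volatility from symmetric solutions, and a canonicalization fix.
# In graph partitioning, an assignment and its bitwise complement describe
# the same cut; a randomized solver may return either one, making the
# supervised targets inconsistent across otherwise identical instances.

def canonicalize(assignment):
    """Pick the lexicographically smaller of the two equivalent labelings."""
    complement = tuple(1 - b for b in assignment)
    return min(tuple(assignment), complement)

# Two runs of a randomized solver on the *same* instance might return:
run_a = (0, 1, 1, 0)
run_b = (1, 0, 0, 1)   # the complement: same cut, different raw label
print(canonicalize(run_a) == canonicalize(run_b))  # -> True
```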
-
We analyze the run-time complexity of computing allocations that are both fair and maximize the utilitarian social welfare, defined as the sum of agents’ utilities. We focus on two tractable fairness concepts: envy-freeness up to one item (EF1) and proportionality up to one item (PROP1). We consider two computational problems: (1) Among the utilitarian-maximal allocations, decide whether there exists one that is also fair; (2) among the fair allocations, compute one that maximizes the utilitarian welfare. We show that both problems are strongly NP-hard when the number of agents is variable, and remain NP-hard for a fixed number of agents greater than two. For the special case of two agents, we find that problem (1) is polynomial-time solvable, while problem (2) remains NP-hard. Finally, with a fixed number of agents, we design pseudopolynomial-time algorithms for both problems. We extend our results to the stronger fairness notions envy-freeness up to any item (EFx) and proportionality up to any item (PROPx).
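As a concrete reference for the fairness notions above, here is a short sketch of an EF1 check under additive utilities over goods; the utilities and allocation are toy assumptions.

```python
# Check envy-freeness up to one item (EF1) for additive utilities over
# goods: agent i must not envy agent j once some single item is removed
# from j's bundle. Utilities and the allocation below are toy assumptions.

def is_ef1(utilities, allocation):
    """utilities[i][g]: agent i's value for good g; allocation[i]: i's goods."""
    n = len(allocation)
    for i in range(n):
        own = sum(utilities[i][g] for g in allocation[i])
        for j in range(n):
            if i == j:
                continue
            others = [utilities[i][g] for g in allocation[j]]
            envy = sum(others) - own
            # EF1: envy must vanish after dropping j's single best item
            # (from i's perspective); empty bundles cause no envy.
            if envy > (max(others) if others else 0):
                return False
    return True

utilities = [[5, 3, 1], [2, 4, 4]]    # 2 agents, 3 goods
allocation = [[0], [1, 2]]            # agent 0 gets good 0; agent 1 gets 1, 2
print(is_ef1(utilities, allocation))  # -> True
```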