Title: Fast Connectivity Minimization on Large-Scale Networks
The connectivity of networks has been widely studied in many high-impact applications, ranging from immunization and critical infrastructure analysis to social network mining and bioinformatic system studies. Regardless of the end application domain, connectivity minimization has always been a fundamental task for effectively controlling the functioning of the underlying system. The combinatorial nature of the connectivity minimization problem imposes an exponential computational complexity to find the optimal solution, which is intractable in large systems. To tackle this computational barrier, greedy algorithms are extensively used to ensure a near-optimal solution by exploiting the diminishing returns property of the problem. Despite the empirical success, the theoretical and algorithmic challenges of the problem still remain wide open. On the theoretical side, the intrinsic hardness and the approximability of the general connectivity minimization problem are still unknown except for a few special cases. On the algorithmic side, existing algorithms are hard pressed to balance optimization quality and computational efficiency. In this article, we address the two challenges by (1) proving that the general connectivity minimization problem is NP-hard and that (1-1/e) is the best approximation ratio for any polynomial algorithm, and (2) proposing the algorithm CONTAIN and its variant CONTAIN+, which balance optimization effectiveness and computational efficiency for eigen-function based connectivity minimization problems on large networks.
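As a rough illustration of the eigen-function view of connectivity, the sketch below greedily deletes k nodes so as to shrink the leading eigenvalue of the adjacency matrix, a standard eigen-based connectivity measure. This is a hypothetical naive baseline, not the CONTAIN or CONTAIN+ algorithm; the full eigenvalue recomputation it performs for every candidate is exactly the cost a scalable method must avoid.

```python
# Naive greedy baseline (illustration only, not CONTAIN/CONTAIN+):
# remove k nodes to minimize the leading eigenvalue of the adjacency matrix.
import numpy as np

def leading_eigenvalue(A):
    """Largest eigenvalue of a symmetric adjacency matrix."""
    return np.max(np.linalg.eigvalsh(A))

def greedy_node_removal(A, k):
    """Greedily pick k nodes whose removal most reduces the leading eigenvalue."""
    A = A.astype(float).copy()
    active = list(range(A.shape[0]))
    removed = []
    for _ in range(k):
        best_node, best_val = None, np.inf
        for v in active:
            trial = A.copy()
            trial[v, :] = 0.0   # deleting a node zeroes its row and column
            trial[:, v] = 0.0
            val = leading_eigenvalue(trial)
            if val < best_val:
                best_node, best_val = v, val
        A[best_node, :] = 0.0
        A[:, best_node] = 0.0
        active.remove(best_node)
        removed.append(best_node)
    return removed, leading_eigenvalue(A)

if __name__ == "__main__":
    # Toy graph: a 5-node ring plus one chord.
    A = np.zeros((5, 5))
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]:
        A[i, j] = A[j, i] = 1.0
    print(greedy_node_removal(A, 2))
```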
Award ID(s):
1939725 1947135
PAR ID:
10232457
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Transactions on Knowledge Discovery from Data
Volume:
15
Issue:
3
ISSN:
1556-4681
Page Range / eLocation ID:
1 to 25
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Network connectivity optimization, which aims to manipulate network connectivity by changing its underlying topology, is a fundamental task behind a wealth of high-impact data mining applications, ranging from immunization, critical infrastructure construction, social collaboration mining, and bioinformatics analysis to intelligent transportation system design. To tackle its exponential computational complexity, greedy algorithms have been extensively used for network connectivity optimization by exploiting its diminishing returns property. Despite the empirical success, two key challenges largely remain open. First, on the theoretical side, the hardness, as well as the approximability, of the general network connectivity optimization problem remain largely unexplored except for a few special instances. Second, on the algorithmic side, current algorithms are often hard pressed to balance optimization quality and computational efficiency. In this paper, we systematically address these two challenges for the network connectivity optimization problem. First, we reveal some fundamental limits by proving that, for a wide range of network connectivity optimization problems, (1) they are NP-hard and (2) (1-1/e) is the optimal approximation ratio for any polynomial algorithm. Second, we propose an effective, scalable, and general algorithm (CONTAIN) that carefully balances optimization quality and computational efficiency.
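    For context, the (1-1/e) ratio is the classical guarantee of greedy selection for monotone submodular objectives, which is the scheme the diminishing returns property enables. A generic sketch is shown below; the toy coverage objective is an assumed stand-in, not the paper's connectivity measure.

```python
# Generic greedy for a monotone submodular set function f with budget k;
# this is the scheme behind the (1 - 1/e) guarantee, shown on an assumed
# toy coverage objective rather than a connectivity objective.
def greedy_submodular(ground_set, f, k):
    """Repeatedly add the element with the largest marginal gain."""
    chosen = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for e in ground_set - chosen:
            gain = f(chosen | {e}) - f(chosen)
            if gain > best_gain:
                best, best_gain = e, gain
        chosen.add(best)
    return chosen

if __name__ == "__main__":
    # f(S) = number of items covered by the sets chosen in S.
    coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
    f = lambda S: len(set().union(*(coverage[s] for s in S))) if S else 0
    print(greedy_submodular(set(coverage), f, 2))
```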
  2. Abstract

    The Quantum Approximate Optimization Algorithm (QAOA) searches for approximate solutions to discrete optimization problems on near-term quantum computers. Since no algorithmic guarantee that QAOA outperforms classical computers is possible without a proof that bounded-error quantum polynomial time (BQP) ≠ nondeterministic polynomial time (NP), it is necessary to investigate the empirical advantages of QAOA. We identify a computational phase transition of QAOA when solving hard problems such as SAT: random instances are the most difficult to train at a critical problem density. We connect the transition to the controllability and the complexity of QAOA circuits. Moreover, we find that the critical problem density generally deviates from the SAT-UNSAT phase transition, where the hardest instances for classical algorithms lie. We then show that the high problem density region, which limits QAOA's performance on hard optimization problems (reachability deficits), is actually a good place to apply QAOA: its approximation ratio decays much more slowly with problem density than that of classical approximation algorithms. Indeed, it is exactly in this region that quantum advantages of QAOA over classical approximation algorithms can be identified.
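    To make the objective concrete, the toy statevector simulation below runs depth-1 QAOA for MaxCut on a small assumed instance and grid-searches the two angles; it only illustrates what QAOA optimizes and does not reproduce the SAT instances or training procedure studied in the paper.

```python
# Toy depth-1 QAOA simulation for MaxCut (assumed illustrative instance).
import numpy as np

def maxcut_costs(n, edges):
    """Vector of cut values C(z) for every n-bit string z."""
    z = np.arange(2 ** n)
    costs = np.zeros(2 ** n)
    for i, j in edges:
        costs += (((z >> i) & 1) != ((z >> j) & 1)).astype(float)
    return costs

def qaoa_expectation(n, edges, gamma, beta):
    """Expected cut value <C> of the depth-1 QAOA state with angles (gamma, beta)."""
    costs = maxcut_costs(n, edges)
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # uniform superposition
    state *= np.exp(-1j * gamma * costs)                         # cost (phase) layer
    idx = np.arange(2 ** n)
    for q in range(n):                                           # mixer: exp(-i*beta*X) per qubit
        mask = 1 << q
        lo = idx[(idx & mask) == 0]
        hi = lo | mask
        a0, a1 = state[lo], state[hi]
        state[lo] = np.cos(beta) * a0 - 1j * np.sin(beta) * a1
        state[hi] = np.cos(beta) * a1 - 1j * np.sin(beta) * a0
    return float(np.real(np.sum(np.abs(state) ** 2 * costs)))

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-cycle; maximum cut is 4
    best = max((qaoa_expectation(4, edges, g, b), g, b)
               for g in np.linspace(0, np.pi, 20)
               for b in np.linspace(0, np.pi, 20))
    print(best)
```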

     
  3.
    Uncertainty is an omnipresent issue in real-world optimization problems. This paper studies a fundamental problem concerning uncertainty, known as the β-robust scheduling problem. Given a set of identical machines and a set of jobs whose processing times follow a normal distribution, the goal is to assign jobs to machines such that the probability that all the jobs are completed by a given common due date is maximized. We give the first systematic study on the complexity and algorithms for this problem. A strong negative result is shown by ruling out the existence of any polynomial-time algorithm with a constant approximation ratio for the general problem unless P=NP. On the positive side, we provide the first FPT-AS (fixed parameter tractable approximation scheme) parameterized by the number of different kinds of jobs, which is a common parameter in scheduling problems. It returns a solution arbitrarily close to the optimal solution, provided that the job processing times follow a few different types of distributions. We further complement the theoretical results by implementing our algorithm. The experiments demonstrate that by choosing an appropriate approximation ratio, the algorithm can efficiently compute a near-optimal solution. 
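    Under the stated assumptions (independent, normally distributed processing times), each machine's completion time is itself normal, so for a fixed assignment the probability that every machine meets the common due date is a product of normal CDF terms. The sketch below evaluates (not optimizes) that objective on a made-up instance.

```python
# Evaluate the β-robust scheduling objective for a fixed assignment:
# probability that every machine finishes by the common due date d,
# assuming independent normally distributed job processing times.
# The instance below is made up for illustration.
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def on_time_probability(assignment, d):
    """assignment: one list of (mean, variance) jobs per machine."""
    prob = 1.0
    for jobs in assignment:
        mu = sum(m for m, _ in jobs)
        var = sum(v for _, v in jobs)
        if var == 0.0:
            prob *= 1.0 if mu <= d else 0.0
        else:
            prob *= normal_cdf((d - mu) / sqrt(var))
    return prob

if __name__ == "__main__":
    # Two machines, four (mean, variance) jobs, common due date 10.
    assignment = [[(4.0, 1.0), (5.0, 0.5)], [(3.0, 2.0), (6.0, 1.0)]]
    print(on_time_probability(assignment, 10.0))
```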
  4. This paper introduces Otti, a general-purpose compiler for (zk)SNARKs that provides support for numerical optimization problems. Otti produces efficient arithmetizations of programs that contain optimization problems including linear programming (LP), semi-definite programming (SDP), and a broad class of stochastic gradient descent (SGD) instances. Numerical optimization is a fundamental algorithmic building block: applications include scheduling and resource allocation tasks, approximations to NP-hard problems, and training of neural networks. Otti takes as input arbitrary programs written in a subset of C that contain optimization problems specified via an easy-to-use API. Otti then automatically produces rank-1 constraint satisfiability (R1CS) instances that express a succinct transformation of those programs. Correct execution of the transformed program implies the optimality of the solution to the original optimization problem. Our evaluation on real benchmarks shows that Otti, instantiated with the Spartan proof system, can prove the optimality of solutions in zero-knowledge in as little as 100 ms, over 4 orders of magnitude faster than existing approaches.
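    The key property claimed here is that correct execution of the transformed program implies optimality of the original solution. For linear programs, one standard way to certify optimality without re-solving is a primal-dual feasible pair with matching objective values (strong duality); the sketch below checks such a certificate in plain numpy and is only an assumed illustration of the kind of check a SNARK could arithmetize, not Otti's actual transformation.

```python
# Assumed illustration: certify LP optimality via a primal-dual pair with
# matching objective values.  This is NOT Otti's transformation, only the
# classical duality idea behind "correct execution implies optimality".
import numpy as np

def check_lp_certificate(A, b, c, x, y, tol=1e-8):
    """Certify that x maximizes c^T x subject to A x <= b, x >= 0,
    using a dual feasible y whose objective b^T y equals c^T x."""
    primal_feasible = np.all(A @ x <= b + tol) and np.all(x >= -tol)
    dual_feasible = np.all(A.T @ y >= c - tol) and np.all(y >= -tol)
    objectives_match = abs(c @ x - b @ y) <= tol
    return primal_feasible and dual_feasible and objectives_match

if __name__ == "__main__":
    # Toy LP: max x1 + x2  s.t.  x1 + 2*x2 <= 4,  3*x1 + x2 <= 6,  x >= 0.
    A = np.array([[1.0, 2.0], [3.0, 1.0]])
    b = np.array([4.0, 6.0])
    c = np.array([1.0, 1.0])
    x = np.array([1.6, 1.2])   # claimed optimum (objective 2.8)
    y = np.array([0.4, 0.2])   # dual certificate (objective 2.8)
    print(check_lp_certificate(A, b, c, x, y))
```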
  5. In this work we advance the understanding of the fundamental limits of computation for Binary Polynomial Optimization (BPO), which is the problem of maximizing a given polynomial function over all binary points. In our main result we provide a novel class of BPO that can be solved efficiently both from a theoretical and computational perspective. In fact, we give a strongly polynomial-time algorithm for instances whose corresponding hypergraph is β-acyclic. We note that the β-acyclicity assumption is natural in several applications including relational database schemes and the lifted multicut problem on trees. Due to the novelty of our proof technique, we obtain an algorithm which is interesting also from a practical viewpoint. This is because our algorithm is very simple to implement and the running time is a polynomial of very low degree in the number of nodes and edges of the hypergraph. Our result completely settles the computational complexity of BPO over acyclic hypergraphs, since the problem is NP-hard on α-acyclic instances. Our algorithm can also be applied to any general BPO problem that contains β-cycles. For these problems, the algorithm returns a smaller instance together with a rule to extend any optimal solution of the smaller instance to an optimal solution of the original instance.
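    For reference, the problem itself is easy to state: maximize a multilinear polynomial over all binary points. The brute-force baseline below just spells out that definition on an assumed toy instance; it does not implement the paper's strongly polynomial-time algorithm for β-acyclic hypergraphs.

```python
# Brute-force definition of Binary Polynomial Optimization (BPO): maximize a
# multilinear polynomial over {0,1}^n.  Toy instance assumed for illustration;
# the paper's algorithm for β-acyclic hypergraphs is not reproduced here.
from itertools import product

def bpo_bruteforce(n, terms):
    """terms: list of (coefficient, tuple of variable indices).
    A term contributes its coefficient iff all of its variables are 1."""
    best_val, best_x = float("-inf"), None
    for x in product((0, 1), repeat=n):
        val = sum(c for c, vs in terms if all(x[i] for i in vs))
        if val > best_val:
            best_val, best_x = val, x
    return best_val, best_x

if __name__ == "__main__":
    # f(x) = 3*x0*x1 - 2*x1*x2 + x2 + 1 over x in {0,1}^3
    terms = [(3, (0, 1)), (-2, (1, 2)), (1, (2,)), (1, ())]
    print(bpo_bruteforce(3, terms))
```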