 Award ID(s):
 1910149
 NSFPAR ID:
 10348203
 Date Published:
 Journal Name:
 Theory of computing
 Volume:
 18
 ISSN:
1557-2862
 Page Range / eLocation ID:
 118
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this

Megow, Nicole; Smith, Adam (Eds.)

Maximum weight independent set (MWIS) admits a 1/k-approximation in inductively k-independent graphs [Karhan Akcoglu et al., 2002; Ye and Borodin, 2012] and a 1/(2k)-approximation in k-perfectly orientable graphs [Kammer and Tholey, 2014]. These are parameterized classes of graphs that generalize k-degenerate graphs, chordal graphs, and intersection graphs of various geometric shapes such as intervals, pseudo-disks, and several others [Ye and Borodin, 2012; Kammer and Tholey, 2014]. We consider a generalization of MWIS to a submodular objective. Given a graph G = (V,E) and a non-negative submodular function f: 2^V → ℝ_+, the goal is to approximately solve max_{S ∈ ℐ_G} f(S), where ℐ_G is the set of independent sets of G. We obtain an Ω(1/k)-approximation for this problem in the two mentioned graph classes. The first approach is via the multilinear relaxation framework and a simple contention resolution scheme, and it yields a randomized algorithm with approximation ratio at least 1/(e(k+1)). This approach also yields parallel (or low-adaptivity) approximations. Motivated by the goal of designing efficient and deterministic algorithms, we describe two other algorithms for inductively k-independent graphs that are inspired by work on streaming algorithms: a preemptive greedy algorithm and a primal-dual algorithm. In addition to being simpler and faster, these algorithms, in the monotone submodular case, yield the first deterministic constant-factor approximations for various special cases that have been previously considered, such as intersection graphs of intervals, disks, and pseudo-disks.
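The baseline behind results of this kind can be illustrated in a few lines. The sketch below is not the paper's preemptive greedy or primal-dual algorithm — just a plain marginal-gain greedy over independent sets, on a made-up toy graph with a made-up coverage objective:

```python
# Illustrative sketch (not the paper's algorithm): marginal-gain greedy
# maximization of a monotone submodular function over independent sets.

def greedy_submodular_mis(vertices, edges, f):
    """Repeatedly add the vertex with the largest marginal gain
    f(S + v) - f(S) among vertices that keep S independent."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    S = set()
    while True:
        # candidates that keep S an independent set
        cand = [v for v in vertices if v not in S and not (adj[v] & S)]
        if not cand:
            return S
        best = max(cand, key=lambda v: f(S | {v}) - f(S))
        if f(S | {best}) - f(S) <= 0:   # no positive gain left
            return S
        S.add(best)

# Toy instance: a path 0-1-2-3 with a coverage-style objective.
cover = {0: {"a"}, 1: {"a", "b"}, 2: {"c"}, 3: {"c", "d"}}
f = lambda S: len(set().union(*(cover[v] for v in S)))
print(greedy_submodular_mis([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)], f))
```

On this toy path the greedy selects the two non-adjacent vertices {1, 3}, covering all four elements.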

Abstract: We consider the problem of covering multiple submodular constraints. Given a finite ground set $N$, a weight function $w: N \rightarrow \mathbb{R}_+$, $r$ monotone submodular functions $f_1, f_2, \ldots, f_r$ over $N$, and requirements $k_1, k_2, \ldots, k_r$, the goal is to find a minimum weight subset $S \subseteq N$ such that $f_i(S) \ge k_i$ for $1 \le i \le r$. We refer to this problem as Multi-Submod-Cover; it was recently considered by Har-Peled and Jones (Few cuts meet many point sets. CoRR, arXiv:1808.03260, 2018), who were motivated by an application in geometry. Even with $r = 1$, Multi-Submod-Cover generalizes the well-known Submodular Set Cover problem (Submod-SC), and it can also be easily reduced to Submod-SC. A simple greedy algorithm gives an $O(\log(kr))$ approximation, where $k = \sum_i k_i$, and this ratio cannot be improved in the general case. In this paper, motivated by several concrete applications, we consider two ways to improve upon the approximation given by the greedy algorithm. First, we give a bicriteria approximation algorithm for Multi-Submod-Cover that covers each constraint to within a factor of $(1 - 1/e - \varepsilon)$ while incurring an approximation of $O(\frac{1}{\varepsilon}\log r)$ in the cost. Second, we consider the special case in which each $f_i$ is obtained from a truncated coverage function, and obtain an algorithm that generalizes previous work on partial set cover (Partial-SC), covering integer programs (CIPs), and multiple vertex cover constraints (Bera et al., Theoret Comput Sci 555:2–8, 2014). Both these algorithms are based on mathematical programming relaxations that avoid the limitations of the greedy algorithm.
We demonstrate the implications of our algorithms and related ideas for several applications ranging from geometric covering problems to clustering with outliers. Our work highlights the utility of the high-level model and the lens of submodularity in addressing this class of covering problems.
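As a point of reference for the greedy baseline the abstract discusses, here is a hedged sketch of the simple greedy for Multi-Submod-Cover: repeatedly buy the element with the best reduction in total remaining requirement per unit weight. The instance, weights, and function names are invented for illustration:

```python
# Hedged sketch of the simple greedy baseline for Multi-Submod-Cover
# (the O(log(kr))-approximation mentioned above); toy data throughout.

def greedy_multi_cover(N, w, fs, ks):
    """fs: monotone submodular functions; ks: their requirements."""
    def deficit(S):
        # total remaining requirement across all r constraints
        return sum(max(0, k - f(S)) for f, k in zip(fs, ks))
    S = set()
    while deficit(S) > 0:
        # pick the element with the best deficit reduction per unit weight
        best = max((v for v in N if v not in S),
                   key=lambda v: (deficit(S) - deficit(S | {v})) / w[v])
        if deficit(S) - deficit(S | {best}) == 0:
            raise ValueError("constraints infeasible")
        S.add(best)
    return S

# Toy instance: two truncated coverage constraints over N = {0, 1, 2}.
sets = {0: {"a", "b"}, 1: {"b"}, 2: {"c"}}
f1 = lambda S: len({"a", "b"} & set().union(*(sets[v] for v in S)))
f2 = lambda S: len({"c"} & set().union(*(sets[v] for v in S)))
w = {0: 1.0, 1: 1.0, 2: 2.0}
print(greedy_multi_cover({0, 1, 2}, w, [f1, f2], [2, 1]))
```

On this instance the greedy buys element 0 (covering "a" and "b") and then element 2 (covering "c"), meeting both requirements.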
We give two new quantum algorithms for solving semidefinite programs (SDPs), providing quantum speed-ups. We consider SDP instances with m constraint matrices, each of dimension n, rank at most r, and sparsity s. The first algorithm assumes an input model where one is given access to an oracle to the entries of the matrices at unit cost. We show that it has run time O~(s^2 (sqrt{m} epsilon^{-10} + sqrt{n} epsilon^{-12})), with epsilon the error of the solution. This gives an optimal dependence in terms of m, n and a quadratic improvement over previous quantum algorithms (when m ≈ n). The second algorithm assumes a fully quantum input model in which the input matrices are given as quantum states. We show that its run time is O~(sqrt{m} + poly(r)) * poly(log m, log n, B, epsilon^{-1}), with B an upper bound on the trace norm of all input matrices. In particular, the complexity depends only polylogarithmically on n and polynomially on r. We apply the second SDP solver to learn a good description of a quantum state with respect to a set of measurements: given m measurements and a supply of copies of an unknown state rho with rank at most r, we show we can find in time sqrt{m} * poly(log m, log n, r, epsilon^{-1}) a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as rho on the m measurements, up to error epsilon. The density matrix obtained is an approximation to the maximum-entropy state consistent with the measurement data, as considered in Jaynes' principle from statistical mechanics. As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weight update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians, given quantum states encoding these Hamiltonians, with a polylogarithmic dependence on the dimension; it is based on ideas developed in quantum principal component analysis.
We also develop a "fast" quantum OR lemma with a quadratic improvement in gate complexity over the construction of Harrow et al. [Harrow et al., 2017]. We believe both techniques might be of independent interest.
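For readers unfamiliar with the object the Gibbs sampler prepares, the Gibbs state at the heart of the matrix multiplicative weights update can be computed classically in a few lines of NumPy. This dense eigendecomposition is purely illustrative — its cost scales polynomially in the dimension, which is exactly what the quantum routine avoids:

```python
# Classical reference computation of the Gibbs (maximum-entropy) state
# rho = e^{-H} / Tr(e^{-H}) for a Hamiltonian H. Illustrative only;
# the quantum sampler prepares this state with polylog(dimension) cost.
import numpy as np

def gibbs_state(H):
    """Dense Gibbs density matrix of a Hermitian matrix H."""
    evals, evecs = np.linalg.eigh(H)
    w = np.exp(-(evals - evals.min()))   # shift exponent for stability
    w /= w.sum()                         # normalize: trace(rho) = 1
    return (evecs * w) @ evecs.conj().T  # sum_i w_i |v_i><v_i|

H = np.diag([0.0, 1.0, 2.0])             # toy 3-dimensional Hamiltonian
rho = gibbs_state(H)
assert np.isclose(np.trace(rho), 1.0)    # unit trace
assert np.allclose(rho, rho.conj().T)    # Hermitian, PSD by construction
```

Lower-energy eigenstates receive larger weight, so for this diagonal H the diagonal of rho decreases monotonically.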

Chambers, Erin W.; Gudmundsson, Joachim (Eds.)

In SoCG 2022, Conroy and Tóth presented several constructions of sparse, low-hop spanners in geometric intersection graphs, including an O(n log n)-size 3-hop spanner for n disks (or fat convex objects) in the plane, and an O(n log² n)-size 3-hop spanner for n axis-aligned rectangles in the plane. Their work left open two major questions: (i) can the size be made closer to linear by allowing larger constant stretch? and (ii) can near-linear size be achieved for more general classes of intersection graphs? We address both questions simultaneously by presenting new constructions of constant-hop spanners that have almost linear size and that hold for a much larger class of intersection graphs. More precisely, we prove the existence of an O(1)-hop spanner for arbitrary string graphs with O(n α_k(n)) size for any constant k, where α_k(n) denotes the k-th function in the inverse Ackermann hierarchy. We similarly prove the existence of an O(1)-hop spanner for intersection graphs of d-dimensional fat objects with O(n α_k(n)) size for any constant k and d. We also improve on some of Conroy and Tóth's specific previous results, in either the number of hops or the size: we describe an O(n log n)-size 2-hop spanner for disks (or more generally objects with linear union complexity) in the plane, and an O(n log n)-size 3-hop spanner for axis-aligned rectangles in the plane. Our proofs are all simple, using separator theorems, recursion, shifted quadtrees, and shallow cuttings.
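The two notions involved — the intersection graph of geometric objects and hop distance in it — can be made concrete with a small brute-force utility. This is not a construction from the paper, just a checking tool (on made-up intervals) of the kind one might use to verify the hop stretch of a candidate spanner subgraph:

```python
# Brute-force utilities for interval intersection graphs and hop distance.
# Hypothetical helper, not a spanner construction from the paper.
from collections import deque

def intersection_graph(intervals):
    """Adjacency sets of the intersection graph of closed intervals."""
    n = len(intervals)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            (a, b), (c, d) = intervals[i], intervals[j]
            if a <= d and c <= b:       # closed intervals overlap
                adj[i].add(j)
                adj[j].add(i)
    return adj

def hop_distance(adj, s, t):
    """BFS hop distance from s to t in any adjacency structure,
    e.g. the full intersection graph or a candidate spanner."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return float("inf")

ivals = [(0, 2), (1, 3), (2, 5), (4, 6)]
G = intersection_graph(ivals)
print(hop_distance(G, 0, 3))   # → 2
```

A t-hop spanner H of G would be checked by asserting hop_distance(H, u, v) ≤ t for every edge (u, v) of G.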

Abstract: We show that a very simple pseudorandom generator fools intersections of k linear threshold functions (LTFs) and arbitrary functions of k LTFs over n-dimensional Gaussian space. The two analyses of our PRG (for intersections versus arbitrary functions of LTFs) are quite different from each other and from previous analyses of PRGs for functions of halfspaces. Our analysis for arbitrary functions of LTFs establishes bounds on the Wasserstein distance between Gaussian random vectors with similar covariance matrices, and combines these bounds with a conversion from Wasserstein distance to "union-of-orthants" distance from [Xi Chen et al., 2014]. Our analysis for intersections of LTFs uses extensions of the classical Sudakov-Fernique type inequalities, which give bounds on the difference between the expectations of the maxima of two Gaussian random vectors with similar covariance matrices. For all values of k, our generator has seed length O(log n) + poly(k) for arbitrary functions of k LTFs and O(log n) + poly(log k) for intersections of k LTFs. The best previous result, due to [Gopalan et al., 2010], only gave such PRGs for arbitrary functions of k LTFs when k = O(log log n) and for intersections of k LTFs when k = O((log n)/(log log n)). Thus our PRG achieves an O(log n) seed length for values of k that are exponentially larger than previous work could achieve. By combining our PRG over Gaussian space with an invariance principle for arbitrary functions of LTFs and with a regularity lemma, we obtain a deterministic algorithm that approximately counts satisfying assignments of arbitrary functions of k general LTFs over {0,1}^n in time poly(n) * 2^{poly(k, 1/epsilon)} for all values of k. This algorithm has a poly(n) runtime for k = (log n)^c for some absolute constant c > 0, while the previous best poly(n)-time algorithms could only handle k = O(log log n). For intersections of LTFs, by combining these tools with a recent PRG due to [R. O'Donnell et al., 2018], we obtain a deterministic algorithm that can approximately count satisfying assignments of intersections of k general LTFs over {0,1}^n in time poly(n) * 2^{poly(log k, 1/epsilon)}. This algorithm has a poly(n) runtime for k = 2^{(log n)^c} for some absolute constant c > 0, while the previous best poly(n)-time algorithms for intersections of k LTFs, due to [Gopalan et al., 2010], could only handle k = O((log n)/(log log n)).
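The quantity such a PRG approximates can be illustrated with the naive randomized baseline that a PRG derandomizes: Monte-Carlo estimation of the Gaussian acceptance probability of an intersection of k LTFs. The weight matrix and thresholds below are made up for the example:

```python
# Monte-Carlo baseline (what a PRG derandomizes): estimate the probability
# that a standard Gaussian vector satisfies all k linear threshold
# functions simultaneously. Hypothetical toy instance, not from the paper.
import numpy as np

def intersection_of_ltfs_acceptance(W, theta, samples=200_000, seed=0):
    """Estimate Pr_{x ~ N(0, I_n)}[W x >= theta coordinate-wise].
    Rows of W are the k LTF weight vectors; theta holds their thresholds."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((samples, W.shape[1]))
    accepted = np.all(x @ W.T >= theta, axis=1)   # all k LTFs satisfied
    return accepted.mean()

# Two orthogonal halfspaces through the origin: true probability is 1/4.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
est = intersection_of_ltfs_acceptance(W, np.zeros(2))
print(round(est, 2))
```

A PRG with seed length O(log n) + poly(log k) replaces the 200,000 independent Gaussian samples with a small, explicitly enumerable set of pseudorandom points achieving the same estimate up to the PRG's error.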