We study the classic set cover problem from the perspective of sub-linear algorithms. Given access to a collection of m sets over n elements in the query model, we show that sub-linear algorithms derived from existing techniques have almost tight query complexities.
On one hand, we first show an adaptation of the streaming algorithm presented in [17] to the sub-linear query model that returns an α-approximate cover using Õ(m(n/k)^{1/(α−1)} + nk) queries to the input, where k denotes the size of a minimum set cover. We then complement this upper bound by proving that for lower values of k, the required number of queries is Ω̃(m(n/k)^{1/(2α)}), even for estimating the size of an optimal cover. Moreover, we prove that even checking whether a given collection of sets covers all the elements would require Ω(nk) queries. These two lower bounds provide strong evidence that the upper bound is almost tight for certain values of the parameter k.
On the other hand, we show that this bound is not optimal for larger values of the parameter k, as there exists a (1 + ε)-approximation algorithm with Õ(mn/(kε²)) queries. We show that this bound is essentially tight for sufficiently small constant ε, by establishing a lower bound of Ω̃(mn/k) on the query complexity.
Our lower-bound results follow by carefully designing two distributions of instances that are hard to distinguish. In particular, our first lower bound involves a probabilistic construction of a certain set system with a minimum set cover of size αk, with the key property that a small number of “almost uniformly distributed” modifications can reduce the minimum set cover size down to k. Thus, these modifications are not detectable unless a large number of queries are asked. We believe that our probabilistic construction technique might find applications to lower bounds for other combinatorial optimization problems.
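For orientation, below is a minimal sketch of the classical greedy set cover routine, which achieves an H_n ≈ ln n approximation but inspects the entire input; the sub-linear algorithms discussed above aim for comparable guarantees while querying far less of the instance. The code is illustrative only and does not implement the paper's query-model algorithm or its lower-bound constructions.

```python
def greedy_set_cover(universe, sets):
    """Classic greedy baseline: repeatedly pick the set covering the most
    still-uncovered elements.  Gives an H_n ~ ln(n) approximation, but reads
    every set in full, i.e. it is not sub-linear in the input size."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        gained = sets[best] & uncovered
        if not gained:
            raise ValueError("the given sets do not cover the universe")
        cover.append(best)
        uncovered -= gained
    return cover

if __name__ == "__main__":
    U = range(1, 8)
    S = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 5}]
    print(greedy_set_cover(U, S))  # picks [2, 0], which covers all of 1..7
```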
Two-stage linear decision rules for multi-stage stochastic programming
Multi-stage stochastic linear programs (MSLPs) are notoriously hard to solve in general. Linear decision rules (LDRs) yield an approximation of an MSLP by restricting the decisions at each stage to be an affine function of the observed uncertain parameters. Finding an optimal LDR is a static optimization problem that provides an upper bound on the optimal value of the MSLP and, under certain assumptions, can be formulated as an explicit linear program. Similarly, as proposed by Kuhn et al. (Math Program 130(1):177–209, 2011), a lower bound for an MSLP can be obtained by restricting decisions in the dual of the MSLP to follow an LDR. We propose a new approximation approach for MSLPs, two-stage LDRs. The idea is to require only the state variables in an MSLP to follow an LDR, which is sufficient to obtain an approximation of an MSLP that is a two-stage stochastic linear program (2SLP). We similarly propose to apply LDR only to a subset of the variables in the dual of the MSLP, which yields a 2SLP approximation of the dual that provides a lower bound on the optimal value of the MSLP. Although solving the corresponding 2SLP approximations exactly is intractable in general, we investigate how approximate solution approaches that have been developed for solving 2SLPs can be applied to solve these approximation problems, and derive statistical upper and lower bounds on the optimal value of the MSLP. In addition to potentially yielding better policies and bounds, this approach requires many fewer assumptions than are required to obtain an explicit reformulation when using the standard static LDR approach. A computational study on two example problems demonstrates that using a two-stage LDR can yield significantly better primal policies and modestly better dual policies than using policies based on a static LDR.
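The schematic below illustrates the difference between a static LDR and the two-stage LDR described above. The notation is simplified and assumed for illustration only; it is not copied from the paper.

```latex
% Schematic only.  An MSLP with state variables x_t (linking consecutive
% stages) and local recourse variables y_t:
\begin{aligned}
\min\ & \mathbb{E}\Bigl[\textstyle\sum_{t=1}^{T} c_t^{\top} x_t + d_t^{\top} y_t\Bigr] \\
\text{s.t.}\ & A_t x_t + B_t x_{t-1} + C_t y_t \ge b_t(\xi_{[t]})
  \quad \text{a.s.,}\ t = 1,\dots,T.
\end{aligned}
% Static LDR: every decision is affine in the observed history,
%     x_t = X_t \xi_{[t]},   y_t = Y_t \xi_{[t]}.
% Two-stage LDR: only the state variables are restricted,
%     x_t = X_t \xi_{[t]},   y_t unrestricted.
% With the coefficient matrices X_t treated as here-and-now decisions, the
% remaining optimization over the y_t decouples across scenarios, so the
% approximation is a two-stage stochastic LP rather than a purely static one.
```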
- Award ID(s): 1634597
- PAR ID: 10113035
- Date Published:
- Journal Name: Mathematical Programming
- ISSN: 0025-5610
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The level set estimation problem seeks to find all points in a domain 𝒳 where the value of an unknown function f : 𝒳 → ℝ exceeds a threshold α. The estimation is based on noisy function evaluations that may be acquired at sequentially and adaptively chosen locations in 𝒳. The threshold value α can either be explicit and provided a priori, or implicit and defined relative to the optimal function value, i.e., α = (1 − ε)f(x*) for a given ε > 0, where f(x*) is the maximal function value and is unknown. In this work we provide a new approach to the level set estimation problem by relating it to recent adaptive experimental design methods for linear bandits in the Reproducing Kernel Hilbert Space (RKHS) setting. We assume that f can be approximated by a function in the RKHS up to an unknown misspecification, and provide novel algorithms for both the implicit and explicit cases in this setting with strong theoretical guarantees. Moreover, in the linear (kernel) setting, we show that our bounds are nearly optimal, namely, our upper bounds match existing lower bounds for threshold linear bandits. To our knowledge, this work provides the first instance-dependent, non-asymptotic upper bounds on the sample complexity of level-set estimation that match information-theoretic lower bounds.
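As a rough illustration of confidence-bound-style level set estimation, here is a generic loop that classifies points as above or below a threshold using a Gaussian-process surrogate and queries the most ambiguous point each round. This is not the RKHS algorithm of the cited work; the kernel choice, function names, and parameters are assumptions of the sketch.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X_train, y_train, X_test, noise=0.05):
    """Posterior mean and standard deviation of a GP with an RBF kernel."""
    K = rbf_kernel(X_train, X_train) + noise ** 2 * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(X_test, X_test)) - (v ** 2).sum(0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def level_set_estimate(f, domain, threshold, budget=30, beta=2.0, noise=0.05):
    """Classify each candidate point as above/below `threshold` using
    confidence bounds; query the most ambiguous unresolved point each round."""
    rng = np.random.default_rng(0)
    X, y = [], []
    unresolved = set(range(len(domain)))
    above, below = set(), set()
    for _ in range(budget):
        if not unresolved:
            break
        if not X:
            i = rng.integers(len(domain))  # first query is random
        else:
            mu, sd = gp_posterior(np.array(X), np.array(y), domain, noise)
            ucb, lcb = mu + beta * sd, mu - beta * sd
            for j in list(unresolved):
                if lcb[j] > threshold:
                    above.add(j); unresolved.discard(j)
                elif ucb[j] < threshold:
                    below.add(j); unresolved.discard(j)
            if not unresolved:
                break
            # most ambiguous point: its confidence band straddles the threshold widely
            i = max(unresolved,
                    key=lambda j: min(ucb[j] - threshold, threshold - lcb[j]))
        X.append(domain[i])
        y.append(f(domain[i]) + noise * rng.standard_normal())
    return above, below, unresolved

if __name__ == "__main__":
    grid = np.linspace(0, 1, 50)[:, None]
    above, below, unknown = level_set_estimate(lambda x: np.sin(3 * x[0]),
                                               grid, threshold=0.5)
    print(len(above), len(below), len(unknown))
```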
-
This work considers a Motion Planning Problem with Dynamic Obstacles (MPDO) in 2D that requires finding a minimum-arrival-time collision-free trajectory for a point robot between its start and goal locations amid dynamic obstacles moving along known trajectories. Existing methods, such as the continuous Dijkstra paradigm, can find an optimal solution by restricting the shape of the obstacles or the motion of the robot, while this work makes no such assumptions. Other methods, such as search-based planners and sampling-based approaches, can compute a feasible solution to this problem but do not provide approximation bounds. Since finding the optimum is challenging for MPDO, this paper develops a framework that can provide tight lower bounds on the optimum. These bounds act as proxies for the optimum, which can then be used to bound the deviation of a feasible solution from the optimum. To accomplish this, we develop a framework that consists of (i) a bi-level discretization approach that converts the MPDO to a relaxed path planning problem, and (ii) an algorithm that can solve the relaxed problem to obtain lower bounds. We also present numerical results to corroborate the performance of the proposed framework. These results show that the bounds obtained by our approach for some instances are up to twice as tight as those of a baseline approach, showcasing potential advantages of the proposed approach.
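For contrast with the lower-bounding framework described above, here is a minimal space-time search of the kind of search-based planner the abstract mentions: it returns a feasible (not provably near-optimal) minimum-arrival-time trajectory on a grid with known dynamic obstacles. The grid discretization, unit time steps, and obstacle interface are all assumptions of this sketch, not the paper's formulation.

```python
import heapq

def space_time_plan(start, goal, width, height, obstacle_at, max_t=200):
    """Uniform-cost search over (cell, time) states for a point robot.
    obstacle_at(cell, t) -> True if `cell` is blocked at integer time t.
    Collision checking is simplified: only the arrival cell at the arrival
    time is tested."""
    moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # wait, or move to a neighbour
    frontier = [(0, start)]            # (arrival time, cell)
    parents = {(start, 0): None}
    while frontier:
        t, cell = heapq.heappop(frontier)
        if cell == goal:
            path, node = [], (cell, t)  # reconstruct (cell, time) trajectory
            while node is not None:
                path.append(node)
                node = parents[node]
            return list(reversed(path))
        if t >= max_t:
            continue
        for dx, dy in moves:
            nxt = (cell[0] + dx, cell[1] + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue
            if obstacle_at(nxt, t + 1) or (nxt, t + 1) in parents:
                continue
            parents[(nxt, t + 1)] = (cell, t)
            heapq.heappush(frontier, (t + 1, nxt))
    return None  # no feasible trajectory within the time horizon

if __name__ == "__main__":
    # one obstacle sweeping right along row 1 of a 5x3 grid
    blocked = lambda cell, t: cell == (min(t, 4), 1)
    print(space_time_plan((0, 0), (4, 2), 5, 3, blocked))
```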
-
In a recent work, Gryaznov, Pudlák and Talebanfard (CCC '22) introduced a linear variant of read-once branching programs, with motivations from circuit and proof complexity. Such a read-once linear branching program is a branching program where each node is allowed to make 𝔽₂-linear queries, and it is read-once in the sense that the queries on each path are linearly independent. As their main result, they constructed an explicit function with average-case complexity 2^{n/3-o(n)} against a slightly restricted model, which they call strongly read-once linear branching programs. The main tool in their lower bound is a new type of extractor that they introduced, called directional affine extractors. Our main result is an explicit function with 2^{n-o(n)} average-case complexity against the strongly read-once linear branching program model, which is almost optimal. This result is based on a new connection from this problem to sumset extractors, a randomness extractor model introduced by Chattopadhyay and Li (STOC '16) as a generalization of many other well-studied models, including two-source extractors, affine extractors and small-space extractors. With this new connection, our lower bound naturally follows from a recent construction of sumset extractors by Chattopadhyay and Liao (STOC '22). In addition, we show that directional affine extractors imply sumset extractors in a restricted setting. We observe that such restricted sumset sources are enough to derive lower bounds, and obtain an arguably more modular proof of the lower bound of Gryaznov, Pudlák and Talebanfard. We also initiate a study of pseudorandomness against linear branching programs. Our main result here is a hitting set generator construction against regular linear branching programs with constant width. We derive this result based on a connection to Kakeya sets over finite fields.
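To make the model concrete, here is a toy evaluator for a linear branching program in which every internal node queries an 𝔽₂-linear function of the input and the read-once condition (linear independence of the queries along the traversed path) is checked explicitly. The data structures and the example program are my own illustration, not the constructions from the paper.

```python
import numpy as np

class Node:
    """Internal node: branch on <query, x> over F2.  Leaf: holds a 0/1 value."""
    def __init__(self, query=None, low=None, high=None, value=None):
        self.query = query          # 0/1 numpy vector, or None at a leaf
        self.low, self.high = low, high
        self.value = value          # 0/1 label at a leaf, else None

def f2_rank(vectors):
    """Rank over GF(2) via Gaussian elimination."""
    rows = [v.copy() % 2 for v in vectors]
    rank, n = 0, (len(rows[0]) if rows else 0)
    for col in range(n):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = (rows[i] + rows[rank]) % 2
        rank += 1
    return rank

def evaluate(node, x):
    path_queries = []
    while node.value is None:
        path_queries.append(node.query)
        bit = int(np.dot(node.query, x)) % 2   # F2-linear query
        node = node.high if bit else node.low
    # read-once: the queries made along this path must be linearly independent
    assert f2_rank(path_queries) == len(path_queries), "path violates read-once"
    return node.value

if __name__ == "__main__":
    leaf0, leaf1 = Node(value=0), Node(value=1)
    inner = Node(query=np.array([0, 0, 1]), low=leaf0, high=leaf1)
    root = Node(query=np.array([1, 1, 0]), low=inner, high=leaf1)
    print(evaluate(root, np.array([1, 0, 1])))  # <(1,1,0), x> = 1 -> leaf1 -> 1
```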
-
In this work we initiate the study of position based quantum cryptography (PBQC) from the perspective of geometric functional analysis and its connections with quantum games. The main question we are interested in asks for the optimal amount of entanglement that a coalition of attackers has to share in order to compromise the security of any PBQC protocol. Known upper bounds for that quantity are exponential in the size of the quantum systems manipulated in the honest implementation of the protocol. However, known lower bounds are only linear. In order to deepen the understanding of this question, here we propose a position verification (PV) protocol and find lower bounds on the resources needed to break it. The main idea behind the proof of these bounds is the understanding of cheating strategies as vector-valued assignments on the Boolean hypercube. The bounds then follow from geometric properties of particular Banach spaces, namely their type constants. Under some regularity assumptions on the aforementioned assignments, these bounds lead to exponential lower bounds on the quantum resources employed, clarifying the question in this restricted case. Known attacks indeed satisfy the assumption we make, although we do not know how universal this feature is. Furthermore, we show that an understanding of the type properties of some more involved Banach spaces would allow one to drop the assumptions and lead to unconditional lower bounds on the resources used to attack our protocol. Unfortunately, we were not able to estimate the relevant type constant. Despite that, we conjecture an upper bound for this quantity and show some evidence supporting it. A positive solution of the conjecture would lead to stronger security guarantees for the proposed PV protocol, providing a better understanding of the question asked above.
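For readers unfamiliar with the notion mentioned above, the following is the standard definition of the (Rademacher) type-p constant of a Banach space; the particular spaces whose type constants matter for the protocol are not reproduced here.

```latex
% A Banach space X has (Rademacher) type p, 1 <= p <= 2, with constant T_p(X)
% if, for every finite sequence x_1, ..., x_n in X,
\left( \mathbb{E}_{\varepsilon}
       \Bigl\| \sum_{i=1}^{n} \varepsilon_i x_i \Bigr\|_X^{p} \right)^{1/p}
\;\le\; T_p(X) \left( \sum_{i=1}^{n} \| x_i \|_X^{p} \right)^{1/p},
% where the \varepsilon_i are independent uniform \pm 1 signs.  The smallest
% such T_p(X) is the type-p constant; bounding it for the spaces that encode
% cheating strategies on the Boolean hypercube is what drives the lower bounds
% described in the abstract.
```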