Title: Subset Sum in Time 2^{n/2}/poly(n)
A major goal in the area of exact exponential algorithms is to give an algorithm for the (worst-case) n-input Subset Sum problem that runs in time 2^{(1/2 - c)n} for some constant c > 0. In this paper we give a Subset Sum algorithm with worst-case running time O(2^{n/2} ⋅ n^{-γ}) for a constant γ > 0.5023 in standard word RAM or circuit RAM models. To the best of our knowledge, this is the first improvement on the classical "meet-in-the-middle" algorithm for worst-case Subset Sum, due to Horowitz and Sahni, which can be implemented in time O(2^{n/2}) in these memory models [Horowitz and Sahni, 1974]. Our algorithm combines a number of different techniques, including the "representation method" introduced by Howgrave-Graham and Joux [Howgrave-Graham and Joux, 2010], subsequent adaptations of that method by Austrin, Kaski, Koivisto, and Nederlof [Austrin et al., 2016] and by Nederlof and Węgrzycki [Nederlof and Węgrzycki, 2021], and the "bit-packing" techniques used in the work of Baran, Demaine, and Pătrașcu [Baran et al., 2005] on subquadratic algorithms for 3SUM.
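For context on the baseline being improved, here is a minimal Python sketch of the classical Horowitz-Sahni meet-in-the-middle idea: enumerate the 2^{n/2} subset sums of each half of the input, sort one side, and binary-search for complements. This illustrates only the O(2^{n/2}) baseline, not the paper's improved algorithm; the function and variable names are illustrative.

```python
from bisect import bisect_left

def subset_sum_meet_in_the_middle(nums, target):
    """Decide whether some subset of nums sums to target.

    Classical Horowitz-Sahni meet-in-the-middle: enumerate the 2^(n/2)
    subset sums of each half, sort one half, and binary-search for
    complements of the other half's sums.
    """
    mid = len(nums) // 2
    left_half, right_half = nums[:mid], nums[mid:]

    def all_subset_sums(items):
        sums = [0]
        for x in items:
            sums += [s + x for s in sums]
        return sums

    left_sums = all_subset_sums(left_half)
    right_sums = sorted(all_subset_sums(right_half))

    for s in left_sums:
        need = target - s
        i = bisect_left(right_sums, need)
        if i < len(right_sums) and right_sums[i] == need:
            return True
    return False


# Example: 3 + 34 + 4 = 41, so a subset summing to 41 exists.
assert subset_sum_meet_in_the_middle([3, 34, 4, 12, 5, 2], 41)
```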
Award ID(s):
2106429
PAR ID:
10501384
Author(s) / Creator(s):
Editor(s):
Megow, Nicole; Smith, Adam
Publisher / Repository:
Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)
Date Published:
Subject(s) / Keyword(s):
Exact algorithms; subset sum; log shaving; Theory of computation → Design and analysis of algorithms
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We give new quantum algorithms for evaluating composed functions whose inputs may be shared between bottom-level gates. Let f be an m-bit Boolean function and consider an n-bit function F obtained by applying f to conjunctions of possibly overlapping subsets of n variables. If f has quantum query complexity Q(f), we give an algorithm for evaluating F using Õ(Q(f)·√n) quantum queries. This improves on the bound of O(Q(f)·n) that follows by treating each conjunction independently, and our bound is tight for worst-case choices of f. Using completely different techniques, we prove a similar tight composition theorem for the approximate degree of f. By recursively applying our composition theorems, we obtain a nearly optimal Õ(n^{1−2^{−d}}) upper bound on the quantum query complexity and approximate degree of linear-size depth-d AC⁰ circuits. As a consequence, such circuits can be PAC learned in subexponential time, even in the challenging agnostic setting. Prior to our work, a subexponential-time algorithm was not known even for linear-size depth-3 AC⁰ circuits. As an additional consequence, we show that AC⁰∘⊕ circuits of depth d+1 require size Ω̃(n^{1/(1−2^{−d})}) ≥ ω(n^{1+2^{−d}}) to compute the Inner Product function even on average. The previous best size lower bound was Ω(n^{1+4^{−(d+1)}}) and only held in the worst case (Cheraghchi et al., JCSS 2018).
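A quick sanity check of the bounds stated in the abstract above, written out as a math block (the statement is taken from the abstract; the d = 1 and d = 2 evaluations are just arithmetic):

```latex
% Composition bound and depth-d consequence, as stated in the abstract:
\[
  Q(F) \le \tilde{O}\bigl(Q(f)\cdot\sqrt{n}\bigr),
  \qquad
  Q(\text{linear-size depth-}d\ \mathrm{AC}^0) \le \tilde{O}\bigl(n^{1-2^{-d}}\bigr).
\]
% Sanity check: d = 1 gives n^{1 - 1/2} = \sqrt{n}, matching Grover's query
% cost for a single n-bit AND/OR; d = 2 gives n^{3/4}, already below the
% trivial O(n) query bound.
```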
  2. The Lovász Local Lemma (LLL) is a cornerstone principle in the probabilistic method of combinatorics, and a seminal algorithm of Moser & Tardos (2010) provides an efficient randomized algorithm to implement it. This algorithm can be parallelized to give an algorithm that uses polynomially many processors and runs in O(log^3 n) time, stemming from O(log n) adaptive computations of a maximal independent set (MIS). Chung et al. (2014) developed faster local and parallel algorithms, potentially running in time O(log^2 n), but these algorithms work under significantly more stringent conditions than the LLL. We give a new parallel algorithm that works under essentially the same conditions as the original algorithm of Moser & Tardos but uses only a single MIS computation, thus running in O(log^2 n) time. This conceptually new algorithm also gives a clean combinatorial description of a satisfying assignment, which might be of independent interest. Our techniques extend to the deterministic LLL algorithm given by Chandrasekaran et al. (2013), leading to an NC algorithm running in time O(log^2 n) as well. We also provide improved bounds on the runtimes of the sequential and parallel resampling-based algorithms originally developed by Moser & Tardos. Our bounds extend to any problem instance in which the tighter Shearer LLL criterion is satisfied. We also improve on the analysis of Kolipaka & Szegedy (2011) to give tighter concentration results.
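To make the resampling framework concrete, here is a minimal sequential Moser-Tardos sketch for CNF-SAT. It is a sketch only: the clause encoding, the max_rounds cutoff, and the example formula are assumptions, and the parallel single-MIS algorithm described in the abstract is not reproduced here.

```python
import random

def moser_tardos_sat(clauses, n_vars, max_rounds=100_000):
    """Sequential Moser-Tardos resampling for CNF-SAT (minimal sketch).

    Each clause is a list of non-zero ints: literal v > 0 asks variable v
    to be True, v < 0 asks variable -v to be False.  Returns a satisfying
    assignment (dict var -> bool), or None if max_rounds is exhausted.
    """
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}

    def violated(clause):
        # A clause is violated iff every one of its literals is falsified.
        return all(assign[abs(lit)] != (lit > 0) for lit in clause)

    for _ in range(max_rounds):
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign
        # Core Moser-Tardos step: pick a violated clause and resample
        # all of its variables independently and uniformly at random.
        for lit in random.choice(bad):
            assign[abs(lit)] = random.random() < 0.5
    return None


# Tiny example: (x1 or x2) and (not x1 or x3).
print(moser_tardos_sat([[1, 2], [-1, 3]], n_vars=3))
```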
  3. Woodruff, David P. (Ed.)
    We give improved algorithms for maintaining edge-orientations of a fully-dynamic graph such that the maximum out-degree is bounded. On one hand, we show how to orient the edges such that the maximum out-degree is proportional to the arboricity $$\alpha$$ of the graph, in either an amortised update time of $$O(\log^2 n \log \alpha)$$ or a worst-case update time of $$O(\log^3 n \log \alpha)$$. On the other hand, motivated by applications including dynamic maximal matching, we obtain a different trade-off, namely an improved update time of either $$O(\log n \log \alpha)$$ amortised or $$O(\log^2 n \log \alpha)$$ worst-case for the problem of maintaining an edge-orientation with at most $$O(\alpha + \log n)$$ out-edges per vertex. Finally, all of our algorithms naturally limit the recourse to be polylogarithmic in $$n$$ and $$\alpha$$. Our algorithms adapt to the current arboricity of the graph and yield improvements over previous work. Firstly, we obtain deterministic algorithms for maintaining a $$(1+\varepsilon)$$ approximation of the maximum subgraph density $$\rho$$ of the dynamic graph, with update times of $$O(\varepsilon^{-6}\log^3 n \log \rho)$$ worst-case and $$O(\varepsilon^{-4}\log^2 n \log \rho)$$ amortised, respectively. We may output a subgraph $$H$$ of the input graph whose density is a $$(1+\varepsilon)$$ approximation of the maximum subgraph density in time linear in the size of the subgraph. These algorithms have improved update time compared to the $$O(\varepsilon^{-6}\log^4 n)$$ algorithm by Sawlani and Wang from STOC 2020. Secondly, we obtain an $$O(\varepsilon^{-6}\log^3 n \log \alpha)$$ worst-case update time algorithm for maintaining a $$(1+\varepsilon)\textnormal{OPT} + 2$$ approximation of the optimal out-orientation of a graph with adaptive arboricity $$\alpha$$, improving on the $$O(\varepsilon^{-6}\alpha^2 \log^3 n)$$ algorithm by Christiansen and Rotenberg from ICALP 2022. This yields the first worst-case polylogarithmic dynamic algorithm for decomposing into $$O(\alpha)$$ forests. Thirdly, we obtain arboricity-adaptive fully-dynamic deterministic algorithms for a variety of problems including maximal matching, $$\Delta+1$$ colouring, and matrix-vector multiplication. All update times are worst-case $$O(\alpha+\log^2 n \log \alpha)$$, where $$\alpha$$ is the current arboricity of the graph. For the maximal matching problem, the state-of-the-art deterministic algorithm by Kopelowitz, Krauthgamer, Porat, and Solomon from ICALP 2014 runs in time $$O(\alpha^2 + \log^2 n)$$, and the algorithm by Neiman and Solomon from STOC 2013 runs in time $$O(\sqrt{m})$$. We give improved running times whenever the arboricity $$\alpha \in \omega(\log n\sqrt{\log\log n})$$.
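As a static reference point (not the dynamic data structure described in the abstract above), the sketch below orients each edge along a degeneracy ordering, which bounds every out-degree by the degeneracy of the graph, itself at most 2α − 1 for arboricity α. The edge-list encoding and function name are assumptions.

```python
import heapq
from collections import defaultdict

def degeneracy_orientation(edges):
    """Orient edges along a degeneracy ordering (static baseline only).

    Each edge is oriented from the endpoint peeled earlier to the one
    peeled later, so every vertex ends up with out-degree at most the
    degeneracy of the graph, which is at most 2*arboricity - 1.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    removed, order = set(), []
    heap = [(len(adj[v]), v) for v in adj]
    heapq.heapify(heap)
    while heap:
        _, v = heapq.heappop(heap)
        if v in removed:
            continue  # stale heap entry
        removed.add(v)
        order.append(v)
        for u in adj[v]:
            if u not in removed:
                # Push the neighbour's updated remaining degree.
                heapq.heappush(heap, (len(adj[u] - removed), u))

    rank = {v: i for i, v in enumerate(order)}
    return [(u, v) if rank[u] < rank[v] else (v, u) for u, v in edges]


# A 4-cycle: every vertex gets out-degree at most 2.
print(degeneracy_orientation([(0, 1), (1, 2), (2, 3), (3, 0)]))
```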
  4. Aichholzer, Oswin; Wang, Haitao (Ed.)
    The 𝓁₂² min-sum k-clustering problem is to partition an input set into clusters C_1,…,C_k to minimize ∑_{i=1}^k ∑_{p,q ∈ C_i} ‖p-q‖₂². Although 𝓁₂² min-sum k-clustering is NP-hard, it is not known whether it is NP-hard to approximate 𝓁₂² min-sum k-clustering beyond a certain factor. In this paper, we give the first hardness-of-approximation result for the 𝓁₂² min-sum k-clustering problem. We show that it is NP-hard to approximate the objective to a factor better than 1.056, and moreover, assuming a balanced variant of the Johnson Coverage Hypothesis, it is NP-hard to approximate the objective to a factor better than 1.327. We then complement our hardness result by giving a fast PTAS for 𝓁₂² min-sum k-clustering. Specifically, our algorithm runs in time O(n^{1+o(1)}d⋅ 2^{(k/ε)^O(1)}), which is the first nearly linear-time algorithm for this problem. We also consider a learning-augmented setting, where the algorithm has access to an oracle that outputs a label i ∈ [k] for each input point, thereby implicitly partitioning the input dataset into k clusters that induce an approximately optimal solution, up to some amount of adversarial error α ∈ [0,1/2). We give a polynomial-time algorithm that outputs a (1+γα)/(1-α)²-approximation to 𝓁₂² min-sum k-clustering, for a fixed constant γ > 0.
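A small sketch of the objective defined in the abstract above, together with the standard identity relating it to a centroid-based cost. The ordered-pair convention and the identity check are assumptions added for illustration; the abstract itself does not fix the pair-counting convention.

```python
import numpy as np

def min_sum_cost(points, labels):
    """l2^2 min-sum k-clustering cost: for each cluster C_i, sum
    ||p - q||_2^2 over ordered pairs (p, q) in C_i.

    Also checks the identity  sum_{p,q in C} ||p - q||^2
                            = 2|C| * sum_{p in C} ||p - mean(C)||^2,
    which relates the objective to a k-means-style centroid cost.
    """
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    total = 0.0
    for c in np.unique(labels):
        cluster = points[labels == c]
        diffs = cluster[:, None, :] - cluster[None, :, :]  # all ordered pairs
        pair_cost = float((diffs ** 2).sum())
        centroid_cost = 2 * len(cluster) * float(
            ((cluster - cluster.mean(axis=0)) ** 2).sum())
        assert np.isclose(pair_cost, centroid_cost)
        total += pair_cost
    return total


# Two well-separated clusters on the line: only within-cluster pairs count.
print(min_sum_cost([[0.0], [1.0], [10.0], [11.0]], [0, 0, 1, 1]))  # 4.0
```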
  5. We present an algorithm that, with high probability, generates a random spanning tree of an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time, where the Õ(⋅) notation hides poly(log n) factors. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^ω). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^ω, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. '96, Kelner-Madry '09, Madry et al. '15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew determinant-based and random-walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute ε-approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)ε^{-2}) time, without using the Johnson-Lindenstrauss lemma, which requires Õ(min{(m + |S|)ε^{-2}, m + nε^{-4} + |S|ε^{-2}}) time. We combine this approximation procedure with an error-correction procedure for handling edges where our estimate isn't sufficiently accurate.
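The effective resistances central to the abstract above can be computed exactly, if slowly, from the Laplacian pseudoinverse. The sketch below shows only that definitional route, not the paper's approximate Schur-complement procedure; the function name and graph encoding are assumptions. For a weighted graph, w_e · R_eff(e) is the probability that edge e appears in a random spanning tree, which is why these quantities drive the sampler.

```python
import numpy as np

def effective_resistances(n, weighted_edges):
    """Exact effective resistances from the Laplacian pseudoinverse.

    Definitional O(n^3) route: R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v).
    This is NOT the paper's method; the paper computes eps-approximate
    resistances much faster via approximate Schur complements.
    """
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse of the Laplacian
    return {(u, v): Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
            for u, v, _ in weighted_edges}


# Unit-weight triangle: each edge has effective resistance 2/3, and
# w_e * R_eff(e) = 2/3 is exactly the probability the edge appears in a
# uniformly random spanning tree (2 of the 3 spanning trees contain it).
print(effective_resistances(3, [(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0)]))
```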