

Title: Algorithms yield upper bounds in differential algebra
Consider an algorithm computing in a differential field with several commuting derivations such that the only operations it performs with the elements of the field are arithmetic operations, differentiation, and zero testing. We show that, if the algorithm is guaranteed to terminate on every input, then there is a computable upper bound for the size of the output of the algorithm in terms of the size of the input. We also generalize this to algorithms working with models of good enough theories (including, for example, difference fields). We then apply this to differential algebraic geometry to show that there exists a computable uniform upper bound for the number of components of any variety defined by a system of polynomial PDEs. We then use this bound to show the existence of a computable uniform upper bound for the elimination problem in systems of polynomial PDEs with delays.
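In symbols, the main result can be summarized roughly as follows (a schematic restatement of the abstract; the symbols A, β, and size(·) are informal notation introduced here, not the paper's):

```latex
% Schematic form of the main bound. A is any algorithm over a differential field
% (K, \delta_1, \dots, \delta_m) that uses only arithmetic, differentiation, and
% zero testing, and that terminates on every input.
\[
  \exists\, \beta : \mathbb{N} \to \mathbb{N} \ \text{computable} \quad\text{such that}\quad
  \operatorname{size}\bigl(A(x)\bigr) \;\le\; \beta\bigl(\operatorname{size}(x)\bigr)
  \quad \text{for every input } x.
\]
```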
Award ID(s):
1760448 1853650 1760413 1853482
NSF-PAR ID:
10336668
Author(s) / Creator(s):
Date Published:
Journal Name:
Canadian Journal of Mathematics
ISSN:
0008-414X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider the problem of designing and analyzing differentially private algorithms that can be implemented on discrete models of computation in strict polynomial time, motivated by known attacks on floating point implementations of real-arithmetic differentially private algorithms (Mironov, CCS 2012) and the potential for timing attacks on expected polynomial-time algorithms. As a case study, we examine the basic problem of approximating the histogram of a categorical dataset over a possibly large data universe X. The classic Laplace Mechanism (Dwork, McSherry, Nissim, Smith, TCC 2006 and J. Privacy & Confidentiality 2017) does not satisfy our requirements, as it is based on real arithmetic, and natural discrete analogues, such as the Geometric Mechanism (Ghosh, Roughgarden, Sundararajan, STOC 2009 and SICOMP 2012), take time at least linear in |X|, which can be exponential in the bit length of the input. In this paper, we provide strict polynomial-time discrete algorithms for approximate histograms whose simultaneous accuracy (the maximum error over all bins) matches that of the Laplace Mechanism up to constant factors, while retaining the same (pure) differential privacy guarantee. One of our algorithms produces a sparse histogram as output. Its "per-bin accuracy" (the error on individual bins) is worse than that of the Laplace Mechanism by a factor of log|X|, but we prove a lower bound showing that this is necessary for any algorithm that produces a sparse histogram. A second algorithm avoids this lower bound, and matches the per-bin accuracy of the Laplace Mechanism, by producing a compact and efficiently computable representation of a dense histogram; it is based on an (n+1)-wise independent implementation of an appropriately clamped version of the Discrete Geometric Mechanism.
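For context, here is a minimal sketch of the baseline (two-sided) Geometric Mechanism for a histogram mentioned above. It is only an illustration: it assumes add/remove neighboring datasets (so each bin count has sensitivity 1), it iterates over the whole universe X, and it samples the noise with floating-point arithmetic, which is precisely what the strict polynomial-time discrete algorithms of this paper are designed to avoid.

```python
import math
import random
from collections import Counter

def two_sided_geometric(eps: float) -> int:
    """Sample integer noise Z with Pr[Z = z] proportional to exp(-eps * |z|)."""
    alpha = math.exp(-eps)
    # The difference of two i.i.d. geometric variables on {0, 1, 2, ...} with
    # success probability 1 - alpha is two-sided geometric with parameter alpha.
    g1 = int(math.log(1.0 - random.random()) // math.log(alpha))
    g2 = int(math.log(1.0 - random.random()) // math.log(alpha))
    return g1 - g2

def geometric_histogram(data, universe, eps: float):
    """eps-differentially private histogram; runs in time linear in |universe|."""
    counts = Counter(data)
    return {x: counts[x] + two_sided_geometric(eps) for x in universe}

# Hypothetical usage: four records over the universe {a, b, c, d}.
noisy = geometric_histogram(["a", "b", "a", "c"], universe="abcd", eps=1.0)
```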
  2. We present a weighted approach to compute a maximum cardinality matching in an arbitrary bipartite graph. Our main result is a new algorithm that takes as input a weighted bipartite graph G(A ∪ B, E) with edge weights of 0 or 1. Let w <= n be an upper bound on the weight of any matching in G. Consider the subgraph induced by all the edges of G with a weight 0. Suppose every connected component in this subgraph has O(r) vertices and O(mr/n) edges. We present an algorithm to compute a maximum cardinality matching in G in O~(m(sqrt{w} + sqrt{r} + wr/n)) time. When all the edge weights are 1 (symmetrically when all weights are 0), our algorithm will be identical to the well-known Hopcroft-Karp (HK) algorithm, which runs in O(m sqrt{n}) time. However, if we can carefully assign weights of 0 and 1 on its edges such that both w and r are sub-linear in n and wr=O(n^{gamma}) for gamma < 3/2, then we can compute a maximum cardinality matching in G in o(m sqrt{n}) time. Using our algorithm, we obtain a new O~(n^{4/3}/epsilon^4) time algorithm to compute an epsilon-approximate bottleneck matching of A, B ⊂ R^2 and a (1/epsilon^{O(d)}) n^{1+(d-1)/(2d-1)} poly log n time algorithm for computing an epsilon-approximate bottleneck matching in d dimensions. All previous algorithms take Omega(n^{3/2}) time. Given any graph G(A ∪ B, E) that has an easily computable balanced vertex separator for every subgraph G'(V',E') of size |V'|^{delta}, for delta in [1/2,1), we can apply our algorithm to compute a maximum matching in O~(m n^{delta/(1+delta)}) time, improving upon the O(m sqrt{n}) time taken by the HK algorithm.
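For reference, below is a compact sketch of the Hopcroft-Karp baseline the abstract compares against: each phase runs a BFS to layer the graph from the free left vertices and then a DFS along those layers to find vertex-disjoint augmenting paths, for O(m sqrt{n}) total time. The adjacency-list input format and variable names are choices made for this sketch; the 0/1-weighted refinement described above is not reproduced here.

```python
from collections import deque

INF = float("inf")

def hopcroft_karp(adj, n_left, n_right):
    """Maximum cardinality matching in a bipartite graph.

    adj[u] lists the right-side neighbours of left vertex u (0-indexed).
    Returns (matching_size, match_left) where match_left[u] is u's partner or -1.
    """
    match_l = [-1] * n_left      # partner of each left vertex
    match_r = [-1] * n_right     # partner of each right vertex
    dist = [0] * n_left

    def bfs():
        # Layer the graph starting from free left vertices; report whether
        # some free right vertex is reachable (i.e., an augmenting path exists).
        queue = deque()
        for u in range(n_left):
            if match_l[u] == -1:
                dist[u] = 0
                queue.append(u)
            else:
                dist[u] = INF
        found = False
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                w = match_r[v]
                if w == -1:
                    found = True
                elif dist[w] == INF:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return found

    def dfs(u):
        # Extend an augmenting path from u, only following the BFS layers.
        for v in adj[u]:
            w = match_r[v]
            if w == -1 or (dist[w] == dist[u] + 1 and dfs(w)):
                match_l[u], match_r[v] = v, u
                return True
        dist[u] = INF
        return False

    matching = 0
    while bfs():
        for u in range(n_left):
            if match_l[u] == -1 and dfs(u):
                matching += 1
    return matching, match_l

# Hypothetical 3x3 instance; size == 3 here.
size, pairing = hopcroft_karp([[0, 1], [0], [1, 2]], n_left=3, n_right=3)
```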
  3. The approximate degree of a Boolean function f is the least degree of a real polynomial that approximates f pointwise to error at most 1/3. The approximate degree of f is known to be a lower bound on the quantum query complexity of f (Beals et al., FOCS 1998 and J. ACM 2001). We find tight or nearly tight bounds on the approximate degree and quantum query complexities of several basic functions. Specifically, we show the following. k-Distinctness: For any constant k, the approximate degree and quantum query complexity of the k-distinctness function is Ω(n^{3/4−1/(2k)}). This is nearly tight for large k, as Belovs (FOCS 2012) has shown that for any constant k, the approximate degree and quantum query complexity of k-distinctness is O(n^{3/4−1/(2^{k+2}−4)}). Image size testing: The approximate degree and quantum query complexity of testing the size of the image of a function [n]→[n] is Ω~(n^{1/2}). This proves a conjecture of Ambainis et al. (SODA 2016), and it implies tight lower bounds on the approximate degree and quantum query complexity of the following natural problems. k-Junta testing: A tight Ω~(k^{1/2}) lower bound for k-junta testing, answering the main open question of Ambainis et al. (SODA 2016). Statistical distance from uniform: A tight Ω~(n^{1/2}) lower bound for approximating the statistical distance of a distribution from uniform, answering the main question left open by Bravyi et al. (STACS 2010 and IEEE Trans. Inf. Theory 2011). Shannon entropy: A tight Ω~(n^{1/2}) lower bound for approximating Shannon entropy up to a certain additive constant, answering a question of Li and Wu (2017). Surjectivity: The approximate degree of the surjectivity function is Ω~(n^{3/4}). The best prior lower bound was Ω(n^{2/3}). Our result matches an upper bound of O~(n^{3/4}) due to Sherstov (STOC 2018), which we reprove using different techniques. The quantum query complexity of this function is known to be Θ(n) (Beame and Machmouchi, Quantum Inf. Comput. 2012 and Sherstov, FOCS 2015). Our upper bound for surjectivity introduces new techniques for approximating Boolean functions by low-degree polynomials. Our lower bounds are proved by significantly refining techniques recently introduced by Bun and Thaler (FOCS 2017).
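For reference, the quantity defined in the first sentence above can be written out as follows (a standard restatement; the notation deg~ is introduced here for convenience):

```latex
% Approximate degree of f : {0,1}^n -> {0,1} with error 1/3.
\[
  \widetilde{\deg}_{1/3}(f) \;=\; \min\Bigl\{ \deg p \;:\;
     p \in \mathbb{R}[x_1,\dots,x_n],\ \
     |p(x) - f(x)| \le \tfrac{1}{3} \text{ for all } x \in \{0,1\}^n \Bigr\}.
\]
% Classical example (Nisan--Szegedy): \widetilde{\deg}_{1/3}(\mathrm{OR}_n) = \Theta(\sqrt{n}).
```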
  4. We generalize Hermite interpolation with error correction, which is the methodology for multiplicity algebraic error correction codes, to Hermite interpolation of a rational function over a field K from function and function derivative values. We present an interpolation algorithm that can locate and correct <= E errors at distinct arguments y in K where at least one of the values or values of a derivative is incorrect. The upper bound E for the number of such y is input. Our algorithm sufficiently oversamples the rational function to guarantee a unique interpolant. We sample (f/g)^(j)(y[i]) for 0 <= j <= L[i], 1 <= i <= n, y[i] distinct, where (f/g)^(j) is the j-th derivative of the rational function f/g, f, g in K[x], GCD(f,g)=1, g != 0, and where N = (L[1]+1)+...+(L[n]+1) >= C + D + 1 + 2(L[1]+1) + ... + 2(L[E]+1), where C is an upper bound for deg(f) and D an upper bound for deg(g), which are input to our algorithm. The arguments y[i] can be poles, which is truly or falsely indicated by a function value infinity with the corresponding L[i]=0. Our results remain valid for fields K of characteristic >= 1 + max L[i]. Our algorithm has the same asymptotic arithmetic complexity as that for classical Hermite interpolation, namely soft-O(N). For polynomials, that is, g=1, and a uniform derivative profile L[1] = ... = L[n], our algorithm specializes to the univariate multiplicity code decoder that is based on the 1986 Welch-Berlekamp algorithm.
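As a quick sanity check, specializing the oversampling condition above to the polynomial case (g = 1, so one may take D = 0) with a uniform derivative profile L[1] = ... = L[n] = ℓ gives the familiar multiplicity-code style error tolerance; this is just arithmetic on the displayed inequality:

```latex
% With N = n(l+1) samples, uniform derivative orders L[i] = l, and D = 0:
\[
  n(\ell+1) \;\ge\; C + 1 + 2E(\ell+1)
  \qquad\Longleftrightarrow\qquad
  E \;\le\; \frac{n(\ell+1) - C - 1}{2(\ell+1)} .
\]
```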
  5. Writing concurrent programs is notoriously hard due to scheduling non-determinism. The most common concurrency bugs are data races, which are accesses to a shared resource that can be executed concurrently. Dynamic data-race prediction is the most standard technique for detecting data races: given an observed, data-race-free trace t, the task is to determine whether t can be reordered to a trace t* that exposes a data race. Although the problem has received significant practical attention for over three decades, its complexity has remained elusive. In this work, we address this lacuna, identifying sources of intractability and conditions under which the problem is efficiently solvable. Given a trace t of size n over k threads, our main results are as follows. First, we establish a general O(k · n^{2(k-1)}) upper bound, as well as an O(n^k) upper bound when certain parameters of t are constant. In addition, we show that the problem is NP-hard and even W[1]-hard parameterized by k, and thus unlikely to be fixed-parameter tractable. Second, we study the problem over acyclic communication topologies, such as server-client hierarchies. We establish an O(k^2 · d · n^2 · log n) upper bound, where d is the number of shared variables accessed in t. In addition, we show that even for traces with k = 2 threads, the problem has no O(n^{2-ϵ}) algorithm under the Orthogonal Vectors conjecture. Since any trace with 2 threads defines an acyclic topology, our upper bound for this case is optimal up to polynomial improvements for moderate values of k and d. Finally, motivated by existing heuristics, we study a distance-bounded version of the problem, where the task is to expose a data race by a witness trace that is similar to t. We develop an algorithm that works in O(n) time when certain parameters of t are constant.
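To make "reordered to a trace t* that exposes a data race" concrete, here is a toy two-thread example (a hypothetical trace constructed for illustration, not one from the paper): in the observed trace the two accesses to x are separated by the critical sections on lock l, so no race is exposed, yet a correct reordering can schedule thread 2's critical section first and place the two conflicting accesses next to each other.

```python
# Observed trace t, one event per entry as (thread, operation).
# w(x)/r(x) are the conflicting accesses; acq(l)/rel(l) guard a lock l.
t = [
    (1, "w(x)"),    # e1: write to x, outside any critical section
    (1, "acq(l)"),
    (1, "rel(l)"),
    (2, "acq(l)"),
    (2, "rel(l)"),
    (2, "r(x)"),    # e2: read of x; in t it is ordered after e1 via rel(l)/acq(l)
]

# A correct reordering t*: each thread executes a prefix of its events from t,
# lock semantics are respected, and every read sees the same write as in t.
# Here e1 and e2 become adjacent, witnessing a predictable data race.
t_star = [
    (2, "acq(l)"),
    (2, "rel(l)"),
    (1, "w(x)"),    # e1
    (2, "r(x)"),    # e2
]
```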