Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- 
  Censor-Hillel, Keren; Grandoni, Fabrizio; Ouaknine, Joel; Puppis, Gabriele (Eds.)
  We prove several results concerning the communication complexity of a collision-finding problem, each of which has applications to the complexity of cutting-plane proofs, which make inferences based on integer linear inequalities. In particular, we prove an Ω(n^{1-1/k} log k / 2^k) lower bound on the k-party number-in-hand communication complexity of collision-finding. This implies a 2^{n^{1-o(1)}} lower bound on the size of tree-like cutting-planes refutations of the bit pigeonhole principle CNFs, which are compact and natural propositional encodings of the negation of the pigeonhole principle, improving on the best previous lower bound of 2^{Ω(√n)}. Using the method of density-restoring partitions, we also extend that previous lower bound to the full range of pigeonhole parameters. Finally, using a refinement of a bottleneck-counting framework of Haken and Cook and Sokolov for DAG-like communication protocols, we give a 2^{Ω(n^{1/4})} lower bound on the size of fully general (not necessarily tree-like) cutting-planes refutations of the same bit pigeonhole principle formulas, improving on the best previous lower bound of 2^{Ω(n^{1/8})}.
  Free, publicly-accessible full text available January 1, 2026.
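Since the bit pigeonhole principle CNFs are central to both results in this entry, the following is a hedged sketch of one standard such encoding; the paper's exact variant (for example, its handling of the number of pigeons m) may differ in details.

```latex
% A minimal sketch of one standard "bit pigeonhole" encoding (an assumption;
% the paper's exact variant may differ). Pigeons i in [m] with m > n, holes are
% strings h in {0,1}^{log n}, and variable x_{i,k} is the k-th bit of the hole
% assigned to pigeon i.
\[
  \mathrm{BPHP}^{m}_{n} \;=\;
  \bigwedge_{1 \le i < j \le m}\;
  \bigwedge_{h \in \{0,1\}^{\log n}}
  \Bigl(\,
    \bigvee_{k=1}^{\log n} \bigl(x_{i,k} \neq h_k\bigr)
    \;\vee\;
    \bigvee_{k=1}^{\log n} \bigl(x_{j,k} \neq h_k\bigr)
  \Bigr).
\]
% Each disequality x_{i,k} != h_k is a single literal (x_{i,k} or its negation,
% depending on the bit h_k), so every clause has 2 log n literals; refuting this
% CNF amounts to certifying that m > n pigeons cannot map injectively to n holes.
```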
- 
  We demonstrate a new connection between e-graphs and Boolean circuits. This allows us to adapt existing literature on circuits to easily arrive at an algorithm for optimal e-graph extraction, parameterized by treewidth, which runs in 2^{O(w^2)} poly(w, n) time, where w is the treewidth of the e-graph. Additionally, we show how the circuit view of e-graphs allows us to apply powerful simplification techniques, and we analyze a dataset of e-graphs to show that these techniques can reduce e-graph size and treewidth by 40-80% in many cases. While the core parameterized algorithm may be adapted to work directly on e-graphs, the primary value of the circuit view is in allowing the transfer of ideas from the well-established field of circuits to e-graphs.
  Free, publicly-accessible full text available November 14, 2025.
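To make the e-graph-to-circuit correspondence concrete, here is a minimal, hedged Python sketch of one natural translation (each e-class becomes an OR over its e-nodes, each e-node an AND over its children's e-classes). The names ENode, EGraph, and to_circuit are illustrative, not the paper's or any library's API, and extraction costs are omitted.

```python
from dataclasses import dataclass, field

@dataclass
class ENode:
    op: str
    children: list        # ids of the child e-classes this e-node depends on

@dataclass
class EGraph:
    classes: dict = field(default_factory=dict)   # e-class id -> list of ENodes

def to_circuit(egraph: EGraph) -> dict:
    """Return a monotone circuit as {gate_name: (kind, [input gate names])}."""
    gates = {}
    for cid, nodes in egraph.classes.items():
        node_gates = []
        for idx, node in enumerate(nodes):
            name = f"node_{cid}_{idx}"
            # an e-node can be chosen only if every child e-class is also realized
            gates[name] = ("AND", [f"class_{c}" for c in node.children])
            node_gates.append(name)
        # an e-class is realized if at least one of its e-nodes is chosen
        gates[f"class_{cid}"] = ("OR", node_gates)
    return gates

# toy e-graph: class 0 contains {a, f(class 1)}; class 1 contains {b}
eg = EGraph({0: [ENode("a", []), ENode("f", [1])], 1: [ENode("b", [])]})
print(to_circuit(eg))
```

Roughly speaking, under this view extracting a term from the e-graph corresponds to choosing a sub-circuit that makes a designated root class gate true, which is what lets treewidth-parameterized circuit techniques carry over.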
- 
  We prove an Ω(n^{1−1/k} log k / 2^k) lower bound on the k-party number-in-hand communication complexity of collision-finding. This implies a 2^{n^{1−o(1)}} lower bound on the size of tree-like cutting-planes proofs of the bit pigeonhole principle, a compact and natural propositional encoding of the pigeonhole principle, improving on the best previous lower bound of 2^{Ω(√n)}.
  Free, publicly-accessible full text available November 13, 2025.
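As a quick sanity check on how the stated bound scales, the following instantiates the displayed formula at a few values of k; this is arithmetic only, not an additional claim from the paper.

```latex
% Plugging values of k into the stated bound Omega(n^{1-1/k} log k / 2^k):
%   k = 2:  Omega(n^{1/2})          (two-party case; log k / 2^k is a constant)
%   k = 3:  Omega(n^{2/3})
%   k = omega(1) and k = o(log n):  n^{1-1/k} = n^{1-o(1)} while 2^k = n^{o(1)},
%                                   so the bound remains n^{1-o(1)}.
% The last regime is consistent with the 2^{n^{1-o(1)}} tree-like cutting-planes
% lower bound quoted above.
```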
- 
  We prove lower bounds on the time and space required for quantum computers to solve a wide variety of problems involving matrices, many of which have only been analyzed classically in prior work. Using a novel way of applying recording query methods, we show that for many linear algebra problems—including matrix-vector product, matrix inversion, matrix multiplication, and powering—existing classical time-space tradeoffs also apply to quantum algorithms with at most a constant factor loss. For example, for almost all fixed matrices A, including the discrete Fourier transform (DFT) matrix, we prove that quantum circuits with at most T input queries and S qubits of memory require T = Ω(n^2/S) to compute the matrix-vector product Ax for x ∈ {0,1}^n. We similarly prove that matrix multiplication for n×n binary matrices requires T = Ω(n^3/√S). Because many of our lower bounds are matched by deterministic algorithms with the same time and space complexity, our results show that quantum computers cannot provide any asymptotic advantage for these problems at any space bound. We also improve the previous quantum time-space tradeoff lower bounds for n×n Boolean (i.e. AND-OR) matrix multiplication from T = Ω(n^2.5/S^0.5) to T = Ω(n^2.5/S^0.25), which has optimal exponents for the powerful query algorithms to which it applies. Our method also yields improved lower bounds for classical algorithms.
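For intuition about what these tradeoffs mean at the extremes of the space range, here is a small worked instantiation; it is straightforward algebra on the bounds stated above, not an additional claim from the paper.

```latex
% Matrix-vector product, T = Omega(n^2 / S):
%   S = O(log n)   =>  T = Omega(n^2 / log n)   (near-quadratic time at tiny space)
%   S = Theta(n)   =>  T = Omega(n)             (trivial: reading x already takes n queries)
% Equivalently, T * S = Omega(n^2) at every space bound.
%
% Binary matrix multiplication, T = Omega(n^3 / sqrt(S)):
%   S = Theta(n^2) =>  T = Omega(n^2),  so even with space for the whole input a
%   quantum circuit needs Omega(n^2) queries.
```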
- 
  On the Rational Degree of Boolean Functions and Applications
  Vishnu Iyer, Siddhartha Jain, Matt Kovacs-Deak, Vinayak M. Kumar, Luke Schaeffer, Daochen Wang, Michael Whitmeyer
  We study a natural complexity measure of Boolean functions known as the (exact) rational degree. For total functions f, it is conjectured that rdeg(f) is polynomially related to deg(f), where deg(f) is the Fourier degree. Towards this conjecture, we show that symmetric functions have rational degree at least deg(f)/2 and monotone functions have rational degree at least √(deg(f)). We observe that both of these lower bounds are tight. In addition, we show that all read-once depth-d Boolean formulae have rational degree at least Ω(deg(f)^{1/d}). Furthermore, we show that almost every Boolean function on n variables has rational degree at least n/2 − O(√n). In contrast to total functions, we exhibit partial functions that witness unbounded separations between rational and approximate degree, in both directions. As a consequence, we show that for quantum computers, post-selection and bounded-error are incomparable resources in the black-box model.
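For reference, the following records the definitions of Fourier degree and exact rational degree as I understand the standard usage; this is a hedged sketch, and the paper's formal conventions may differ in minor ways.

```latex
% Assumed standard definitions (to be checked against the paper).
% Fourier degree: the degree of the unique multilinear real polynomial p with
%   p(x) = f(x) for all x in {0,1}^n;   deg(f) := deg(p).
% Exact rational degree: the least d such that there exist real polynomials
% p, q of degree at most d with q(x) != 0 and
\[
  f(x) \;=\; \frac{p(x)}{q(x)} \qquad \text{for every } x \in \{0,1\}^n,
\]
%   and rdeg(f) is that least d.
% Since rdeg(f) <= deg(f) always holds (take q identically 1), the conjectured
% polynomial relation amounts to the lower-bound direction rdeg(f) >= deg(f)^{Omega(1)}.
```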
- 
  Given a function f on (F_2)^n, we study the following problem. What is the largest affine subspace U such that when restricted to U, all the nontrivial Fourier coefficients of f are very small? For the natural class of bounded Fourier degree d functions f : (F_2)^n → [−1, 1], we show that there exists an affine subspace of dimension at least Ω̃(n^(1/d!) k^(−2)), wherein all of f's nontrivial Fourier coefficients become smaller than 2^(−k). To complement this result, we show the existence of degree d functions with coefficients larger than 2^(−d log n) when restricted to any affine subspace of dimension larger than Ω(d n^(1/(d−1))). In addition, we give explicit examples of functions with analogous but weaker properties. Along the way, we provide multiple characterizations of the Fourier coefficients of functions restricted to subspaces of (F_2)^n that may be useful in other contexts. Finally, we highlight applications and connections of our results to parity kill number and affine dispersers.
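To fix notation for the statement above, here is a hedged sketch of the standard Fourier setup over (F_2)^n and of what restricting to an affine subspace means; the paper's own conventions may differ slightly.

```latex
% Standard Fourier analysis over (F_2)^n (assumed conventions).
% For f : (F_2)^n -> [-1, 1],
\[
  f(x) \;=\; \sum_{\gamma \in (F_2)^n} \hat{f}(\gamma)\,(-1)^{\langle \gamma, x\rangle},
  \qquad
  \hat{f}(\gamma) \;=\; \mathbb{E}_{x}\bigl[f(x)\,(-1)^{\langle \gamma, x\rangle}\bigr].
\]
% An affine subspace is U = a + V with V a linear subspace of (F_2)^n.  The
% restriction f|_U is a function on U, identified with (F_2)^{dim V}, and the
% result above asks for U of large dimension on which every nontrivial
% coefficient of f|_U (i.e. every coefficient at gamma != 0) has magnitude
% below 2^{-k}.
```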
- 
  Cumulative memory – the sum of space used per step over the duration of a computation – is a fine-grained measure of time-space complexity that was introduced to analyze cryptographic applications like password hashing. It is a more accurate cost measure for algorithms that have infrequent spikes in memory usage and are run in environments such as cloud computing that allow dynamic allocation and de-allocation of resources during execution, or when multiple instances of an algorithm are interleaved in parallel. We prove the first lower bounds on cumulative memory complexity for both sequential classical computation and quantum circuits. Moreover, we develop general paradigms for bounding cumulative memory complexity inspired by the standard paradigms for proving time-space tradeoff lower bounds, which can only lower bound the maximum space used during an execution. The resulting lower bounds on cumulative memory that we obtain are just as strong as the best time-space tradeoff lower bounds, which are very often known to be tight. Although previous results for pebbling and random oracle models have yielded time-space tradeoff lower bounds larger than the cumulative memory complexity, our results show that in general computational models such separations cannot follow from known lower bound techniques and are not true for many functions. Among many possible applications of our general methods, we show that any classical sorting algorithm with success probability at least 1/poly(n) requires cumulative memory Ω(n^2), any classical matrix multiplication algorithm requires cumulative memory Ω(n^6/T), any quantum sorting circuit requires cumulative memory Ω(n^3/T), and any quantum circuit that finds k disjoint collisions in a random function requires cumulative memory Ω(k^3 n/T^2).
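The measure itself has a one-line definition; the following restates it directly from the description above and records the trivial comparison with the time-space product, which is why cumulative-memory lower bounds are the stronger statement.

```latex
% Cumulative memory of a T-step computation, where S_t is the space (bits or
% qubits) in use at step t:
\[
  \mathrm{CM} \;=\; \sum_{t=1}^{T} S_t \;\le\; T \cdot \max_{t} S_t \;=\; T \cdot S .
\]
% Hence CM = Omega(g(n)) is at least as strong as the tradeoff T * S = Omega(g(n)):
% an algorithm whose memory spikes only briefly could in principle have CM far
% below T * S, and the results above show that for many natural problems no such
% savings are possible.
```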
- 
  Projection maintenance is one of the core data structure tasks. Efficient data structures for projection maintenance have led to recent breakthroughs in many convex programming algorithms. In this work, we further extend this framework to the Kronecker product structure. Given a constraint matrix A and a positive semi-definite matrix W ∈ R^{n×n} with a sparse eigenbasis, we consider the task of maintaining the projection in the form of B^⊤(BB^⊤)^{−1}B, where B = A(W⊗I) or B = A(W^{1/2}⊗W^{1/2}). At each iteration, the weight matrix W receives a low-rank change and we receive a new vector h. The goal is to maintain the projection matrix and answer the query B^⊤(BB^⊤)^{−1}Bh with good approximation guarantees. We design a fast dynamic data structure for this task, and it is robust against an adaptive adversary. Following the beautiful and pioneering work of [Beimel, Kaplan, Mansour, Nissim, Saranurak and Stemmer, STOC'22], we use tools from differential privacy to reduce the randomness required by the data structure and further improve the running time.
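As a correctness reference only (not the paper's dynamic data structure, which avoids recomputation), here is a naive NumPy sketch of the query being maintained, using the B = A(W⊗I) form; the function name, shapes, and the use of a pseudo-inverse are illustrative assumptions.

```python
import numpy as np

def projection_query(A: np.ndarray, W: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Recompute B^T (B B^T)^+ B h from scratch, with B = A (W kron I)."""
    n = W.shape[0]
    d = A.shape[1] // n                 # assume A has n*d columns so W ⊗ I_d is conformable
    B = A @ np.kron(W, np.eye(d))       # B = A (W ⊗ I)
    # pseudo-inverse for numerical robustness when B B^T is singular
    return B.T @ np.linalg.pinv(B @ B.T) @ (B @ h)

rng = np.random.default_rng(0)
m, n, d = 3, 4, 2
A = rng.standard_normal((m, n * d))
W = np.eye(n) + 0.1 * np.ones((n, n))   # a positive semi-definite weight matrix
h = rng.standard_normal(n * d)
print(projection_query(A, W, h))
```

A dynamic data structure for this task would instead update its internal state under the low-rank change to W and answer the query approximately, rather than rebuilding B and its pseudo-inverse each iteration as the baseline above does.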